Sep 30, 2007

List of device bandwidths (Wikipedia)

This is a list of device bandwidths: the channel capacity (or, more informally, bandwidth) of some computer devices employing methods of data transport, listed in bit/s, kilobit/s (kbit/s), megabit/s (Mbit/s), or gigabit/s (Gbit/s) as appropriate, and also in megabytes per second (MB/s). Devices are listed in order from lowest bandwidth to highest.

Whether to use bit/s or byte/s (B/s) is often a matter of convention. The most commonly cited measurement is bolded. In general, parallel interfaces are quoted in byte/s (B/s), serial in bit/s. On devices like modems, bytes may be more than 8 bits long because they may be individually padded out with additional start and stop bits; the figures below will reflect this. Where channels use line codes, such as Ethernet, Serial ATA and PCI Express, quoted speeds are for the decoded signal.

Many of these figures are theoretical maxima, and various real-world considerations will generally keep the actual effective throughput much lower. The actual throughput achievable on Ethernet networks, for example (especially when heavily loaded or when running over substandard media), is debatable. The figures are also simplex speeds, which may conflict with the duplex speeds vendors sometimes use in promotional materials.

All of the figures listed here are true metric quantities and not binary prefixes (1 kilobit, for example, is 1000 bits, not 1024 bits). Similarly, kB, MB, GB mean kilobytes, megabytes, gigabytes, not kibibytes, mebibytes, gibibytes.


The Common System Interface: Intel's Future Interconnect

By: David Kanter (dkanter@realworldtech.com)
Updated: 08-28-2007

Introduction


In the competitive x86 microprocessor market, there are always swings and shifts based on the latest introductions from the two main protagonists: Intel and AMD. The next anticipated shift is coming in 2008-9 when Intel will finally replace their front side bus architecture. This report details Intel’s next generation system interconnect and the associated cache coherency protocol, likely deployment plans across the desktop, notebook and server market as well as the economic implications.

Intel’s front-side bus has a long history that dates back to 1995 with the release of the Pentium Pro (P6). The P6 was the first processor to offer cheap and effective multiprocessing support; up to four CPUs could be connected to a single shared bus with very little additional effort for an OEM. The importance of cheap and effective cannot be overstated. Before the P6, multiprocessor systems used special chipsets and usually a proprietary variant of UNIX; consequently they were quite expensive. Initially, Intel’s P6 could not always match the performance of these high end systems from the likes of IBM, DEC or Sun, but the price was so much lower that the performance gap became a secondary consideration. The workstation and low-end server markets embraced the P6 precisely because the front-side bus enabled inexpensive multiprocessors.

Ironically, the P6 bus was the subject of considerable controversy at Intel. It was originally based on the bus used in the i960 project, and the designers came under pressure from various corporate factions to re-use the bus from the original Pentium so that OEMs would not have to redesign and validate new motherboards, and so end users could easily upgrade. However, the Pentium bus was strictly in-order and could only have a single memory access in flight at a time, making it entirely inadequate for an out-of-order microprocessor like the P6 that would have many simultaneous memory accesses. Ultimately a compromise was reached that preserved most of the original P6 bus design, and the split-transaction P6 bus is still being used in new products 10 years after the design was started. The next step for Intel’s front side bus was the shift to the P4 bus, which was electrically similar to the P6 bus and issued commands at roughly the same rate, but clocked the data bus four times faster to provide fairly impressive throughput.

While the inexpensive P4 bus is still in use for Intel’s x86 processors, the rest of the world has moved on to newer point-to-point interconnects rather than shared buses. Compared to systems based on HP’s EV7 and, more importantly, AMD’s Opteron, Intel’s front-side bus shows its age; it simply does not scale as well. Intel’s own Xeon and Xeon MP chipsets illustrate the point quite well: both use two separate front-side bus segments in order to provide enough bandwidth to feed all the processors. Similarly, Intel designed all of their MPUs with relatively large caches to reduce the pressure on the front-side bus and memory systems, exemplified by the Xeon MP and Itanium 2, sporting 16MB and 24MB of L3 cache respectively. While some critics claim that Intel is pushing an archaic solution and patchwork fixes on the industry, the truth is that this is simply a replay of the issues surrounding the Pentium versus P6 bus debate, writ large. The P4 bus is vastly simpler and less expensive than a higher performance, point-to-point interconnect such as HyperTransport or CSI. After 10 years of shipping products, there is a massive amount of knowledge and infrastructure invested in the front-side bus architecture, both at Intel and at strategic partners. Tossing out the front-side bus will force everyone back to square one. Intel opted to defer this transition by increasing cache sizes, adding more bus segments and including snoop filters to create competitive products.

While Intel’s platform engineers devised more and more creative ways to improve multiprocessor performance using the front-side bus, a highly scalable next generation interconnect was being jointly designed by engineers from teams across Intel and some of the former Alpha designers acquired from Compaq. This new interconnect, known internally as the Common System Interface (CSI), is explicitly designed to accommodate integrated memory controllers and distributed shared memory. CSI will be used as the internal fabric for almost all future Intel systems, starting with Tukwila, an Itanium processor, and Nehalem, an enhanced derivative of the Core microarchitecture, both slated for 2008. Not only will CSI be the cache coherent fabric between processors, but versions will be used to connect I/O chips and other devices in Intel based systems.

The design goals for CSI are rather intimidating. Roughly 90% of all servers sold use four or fewer sockets, and that is where Intel faces the greatest competition. For these systems, CSI must provide low latency and high bandwidth while keeping costs attractive. On the other end of the spectrum, high-end systems using Xeon MP and Itanium processors are intended for mission critical deployment and require extreme scalability and reliability for configurations as large as 2048 processors, but customers are willing to pay extra for those benefits. Many of the techniques that larger systems use to ensure reliability and scalability are more complicated than necessary for smaller servers (let alone notebooks or desktops), producing somewhat contradictory objectives. Consequently, it should be no surprise that CSI is not a single implementation, but rather a closely related family of implementations that will serve as the backbone of Intel’s architectures for the coming years.


Physical Layer


Unlike the front-side bus, CSI is a cleanly defined, layered network fabric used to communicate between various agents. These ‘agents’ may be microprocessors, coprocessors, FPGAs, chipsets, or generally any device with a CSI port. There are five distinct layers in the CSI stack, from lowest to highest: Physical, Link, Routing, Transport and Protocol [27]. Table 1 below describes the different layers and responsibilities of each layer.



Table 1: Common System Interface Layers

While all five layers are clearly defined, they are not all necessary. For example, the routing layer is optional in less complex systems, such as a desktop, where there are only two CSI agents (the MPU and chipset). Similarly, in situations where all CSI agents are directly connected, the transport layer is redundant, as end-to-end reliability is equivalent to link layer reliability.

CSI is defined as a variable width, point-to-point, packet-based interface implemented as two uni-directional links with low-voltage differential signaling. A full width CSI link is physically configured with 20 bit lanes in each direction; these bit lanes are divided into four quadrants of 5 bit lanes, as depicted in Figure 1 [25]. While most CSI links are full width, half width (2 quadrants) and quarter width (a single quadrant) options are also possible. Reduced width links will likely be used for connecting MPUs and chipset components. Additionally, some CSI ports can be bifurcated, so that they can connect to two different agents (for example, so that an I/O hub can directly connect to two different MPUs) [25]. The width of the link determines the physical unit of information transfer, or phit, which can be 5, 10 or 20 bits.



Figure 1: Anatomy of a CSI Link

In order to accommodate various link widths (and hence phit sizes) and bit orderings, each nibble of output is muxed on-chip before being transmitted across the physical transmission pins, and the inverse is done on the receive side [25]. The nibble muxing eliminates trace length mismatches, which reduces skew and improves performance. To support port bifurcation efficiently, the bit lanes are swizzled to avoid excessive wire crossings, which would require additional layers in motherboards. Together these two techniques permit a CSI port to reverse its pins (i.e. send output for pin 0 to pin 19, etc.), which is needed when processor sockets are mounted on both sides of a motherboard.

CSI is largely defined in a way that does not require a particular clocking mechanism for the physical layer. This is essential to balance current latency requirements, which tend to favor parallel interfaces, against future scalability, which requires truly serial technology. Clock encoding and clock and data recovery are prerequisites for optical interconnects, which will eventually be used to overcome the limitations of copper. By specifying CSI in an expansive fashion, the architects created a protocol stack that can naturally be extended from a parallel implementation over copper to optical communication.

Initial implementations appear to use clock forwarding, probably with one clock lane per quadrant to reduce skew and enable certain power saving techniques [16] [19] [27]. While some documents reference a single clock lane for the entire link, this seems unlikely as it would require much tighter skew margins between different data lanes. This would result in more restrictive board design rules and more expensive motherboards.

When a CSI link first boots up, it goes through a handshake based physical layer calibration and training process [14] [15]. Initially, the link is treated like a collection of independent serial lanes. Then the transmitter sends several specially designed phit patterns that will determine and communicate back the intra-lane skew and detect any lane failures. This information is used to train the receiver circuitry to compensate for skew between the different lanes that may arise due to different trace lengths, and process, temperature and voltage variation. Once the link has been trained, it then begins to operate as a parallel interface, and the circuitry used for training is shut down to save power. The link and any de-skewing circuitry will also be periodically recalibrated, based on a timing counter; according to one patent this counter triggers every 1-10ms [13]. When the retraining occurs, all higher level functionality including flow control and data transmission is temporarily halted. This skew compensation enables motherboard designs with less restrictive design rules for trace length matching, which are less expensive as a result.

It appears some variants of CSI can designate data lanes as alternate clocking lanes, in case of a clock failure [16]. In that situation, the transmitter and receiver would then disable the alternate clock lane and probably that lane’s whole quadrant. The link would then re-initialize at reduced width, using the alternate clock lane for clock forwarding, albeit with reduced data bandwidth. The advantage is that clock failures are no longer fatal, and gracefully degrade service in the same manner as a data lane failure, which can be handled through virtualization techniques in the link layer.

Initial CSI implementations in Intel’s 65nm and 45nm high performance CMOS processes target 4.8-6.4GT/s operation, thus providing 12-16GB/s of bandwidth in each direction and 24-32GB/s for each link [30] [33]. Compared to the parallel P4 bus, CSI uses vastly fewer pins running at much higher data rates, which not only simplifies board routing, but also makes more CPU pins available for power and ground.
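
As a sanity check on those figures, the arithmetic is simple enough to show directly. The short Python sketch below is purely illustrative; the transfer rates and the 20-lane full width link come from the numbers above, and each lane carries one bit per transfer.

# Back-of-the-envelope CSI bandwidth check (illustrative only).
def csi_bandwidth_gb_per_s(transfer_rate_gt: float, lanes: int = 20) -> float:
    """Raw bandwidth of one direction of a full width CSI link, in GB/s."""
    bits_per_second = transfer_rate_gt * 1e9 * lanes
    return bits_per_second / 8 / 1e9

for rate in (4.8, 6.4):
    one_way = csi_bandwidth_gb_per_s(rate)
    print(f"{rate} GT/s: {one_way:.0f} GB/s per direction, "
          f"{2 * one_way:.0f} GB/s per link")
# 4.8 GT/s: 12 GB/s per direction, 24 GB/s per link
# 6.4 GT/s: 16 GB/s per direction, 32 GB/s per link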


Link Layer


The CSI link layer is concerned with reliably sending data between two directly connected ports, and with virtualizing the physical layer. Protocol packets from the higher layers of CSI are transmitted as a series of 80 bit flow control units (flits) [25]. Depending on the width of a physical link, transmitting each flit takes 4, 8 or 16 cycles. A single flit can contain up to 64 bits of data payload; the remaining 16 bits in the header are used for flow control, packet interleave, virtual networks and channels, error detection and other purposes [20] [22]. A higher level protocol packet can consist of as little as a single flit, for control messages, power management and the like, but could include a whole cache line, which is currently 64B for x86 MPUs and 128B for IPF.
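
To make the flit bookkeeping concrete, here is a minimal sketch. The 80 bit flit, the 64 bit payload and the 5/10/20 bit link widths come from the text above; the helper functions themselves are our own.

FLIT_BITS = 80          # 64 payload bits + 16 control bits, per the text above
PAYLOAD_BITS = 64

def cycles_per_flit(link_width_bits: int) -> int:
    """A flit is sent as a series of phits; phit size equals link width."""
    assert link_width_bits in (5, 10, 20)
    return FLIT_BITS // link_width_bits

def flits_for_payload(payload_bytes: int) -> int:
    """Data flits needed for a payload (header/control flits not counted)."""
    bits = payload_bytes * 8
    return -(-bits // PAYLOAD_BITS)   # ceiling division

print(cycles_per_flit(20))        # 4  (full width link)
print(cycles_per_flit(5))         # 16 (quarter width link)
print(flits_for_payload(64))      # 8  (64B x86 cache line)
print(flits_for_payload(128))     # 16 (128B IPF cache line)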

Flow control and error detection/correction are part of the CSI link layer, and operate between each transmitter and receiver pair. CSI uses a credit based flow control system to detect errors and avoid collisions or other quality-of-service issues [22] [34]. CSI links have a number of virtual channels, which can form different virtual networks [8]. These virtual channels are used to ensure deadlock free routing, and to group traffic according to various characteristics, such as transaction size or type, coherency, ordering rules and other information [23]. These particular details are intertwined with other aspects of CSI-based systems and are discussed later. To reduce the storage requirements for the different virtual channels, CSI uses two level adaptive buffering. Each virtual channel has a small dedicated buffer, and all channels share a larger buffer pool [8].

Under ordinary conditions, the transmitter will first acquire enough credits to send an entire packet; as previously noted, this could be anywhere from 1 to 18 or more flits. The flits will be transmitted to the receiver, and also copied into a retry buffer. Every flit is protected by an 8 bit CRC (or 16 bits in some cases), which will alert the receiver to corruption or transmission errors. When the receiver gets the flits, it will compute the CRC to check that the data is correct. If everything is clear, the receiver will send an acknowledgement (and credits) with the next flit that goes from the receiver side to the transmitter side (remember, there are two uni-directional links). Then the transmitter will clear the flits out of the retry buffer window. If the CRC indicates an error, the receiver side will send a link layer retry request to the transmitter side. The transmitter then begins resending the contents of the retry buffer until the flits have been correctly received and acknowledged. Figure 2 below shows an example of several transactions occurring across a CSI link using the flow control counters.



Figure 2: Flow Control Example [34]
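
The credit and retry behavior described above maps naturally onto a small state machine. The sketch below models it in simplified form; zlib.crc32 stands in for the 8 or 16 bit flit CRC, and everything beyond the credits/retry-buffer/acknowledgement structure from the text is our own simplification.

import zlib
from collections import deque

class CsiTransmitter:
    """Simplified CSI-style sender: credit counter plus retry buffer."""
    def __init__(self, credits):
        self.credits = credits
        self.retry_buffer = deque()   # flits sent but not yet acknowledged
        self.seq = 0

    def send(self, payload, rx):
        if self.credits == 0:
            return False              # stall until the receiver returns credits
        flit = (self.seq, payload, zlib.crc32(payload))  # CRC stand-in
        self.retry_buffer.append(flit)
        self.credits -= 1
        self.seq += 1
        self._deliver(flit, rx)
        return True

    def _deliver(self, flit, rx):
        ok, acked_seq = rx.receive(flit)
        if ok:
            # The ack and a returned credit ride back on the opposite link.
            while self.retry_buffer and self.retry_buffer[0][0] <= acked_seq:
                self.retry_buffer.popleft()
            self.credits += 1
        else:
            # Link layer retry: resend everything still in the retry buffer.
            for f in list(self.retry_buffer):
                self._deliver(f, rx)

class CsiReceiver:
    def receive(self, flit):
        seq, payload, crc = flit
        return (zlib.crc32(payload) == crc), seq

tx, rx = CsiTransmitter(credits=4), CsiReceiver()
assert tx.send(b"cache line chunk", rx)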

While CSI’s flow control mechanisms will prevent serious contention, they do not necessarily guarantee low latency. To ensure that high priority control packets are not blocked by longer latency data packets, CSI incorporates packet interleaving in the link layer [21]. A bit in the flit header indicates whether the flit belongs to a normal or interleaved packet. For example, if a CSI agent is sending a 64B cache line (8+ flits) and it must send out a cache coherency snoop, rather than delaying the snoop by 8 or more cycles, it could interleave the snoop. This would significantly improve the latency of the snoop, while barely slowing down the data transmission. Similarly, this technique could be used to interleave multiple data streams so that they arrive in a synchronized fashion, or simply to reduce the variance in packet latency.
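
A rough model of this mechanism: one header bit marks a flit as belonging to an interleaved packet, letting an urgent packet cut into a longer transmission while the receiver demultiplexes the two streams. The scheduler below is our own illustration of the idea, not Intel's implementation.

from collections import deque

NORMAL, INTERLEAVED = 0, 1

def schedule(data_flits, urgent_arrivals):
    """One flit per cycle; urgent packets (cycle -> flit) preempt bulk data.

    The interleave bit in each flit header tells the receiver which
    packet a flit belongs to, so the streams can be separated again."""
    data = deque(data_flits)
    urgent = dict(urgent_arrivals)
    cycle, out = 0, []
    while data or urgent:
        if cycle in urgent:
            out.append((INTERLEAVED, urgent.pop(cycle)))
        elif data:
            out.append((NORMAL, data.popleft()))
        cycle += 1
    return out

line = [f"D{i}" for i in range(8)]       # a 64B cache line: 8 data flits
print(schedule(line, {3: "SNP"}))
# The snoop goes out the cycle it arrives; D3..D7 each slip by one cycle.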

The link layer can also virtualize the underlying physical bit lanes. This is done by turning off some of the physical transmitter lanes, and assigning these bit lanes either a static logical value, or a value based on the remaining bits in each phit [24]. For example, a failed data lane could be removed, and replaced by one of the lanes which sends CRC, thus avoiding any data pollution or power consumption as a result. The link would then continue to function with reduced CRC protection, similar to the failover mechanisms for FB-DIMMs.

Once the physical layer has been calibrated and trained, as discussed previously, the link layer goes through an initialization process [31]. The link layer is configured to auto-negotiate and exchange various parameters, which are needed for operation [20]. Table 2 below is a list of some (but not necessarily all) of the parameters that each link will negotiate. The process starts with each side of the link assuming the default values, and then negotiating which values to actually use during normal operation. The link layer can also issue an in-band reset command, which stops the clock forwarding, and forces the link to recalibrate the physical layer and then re-initialize the link layer.



Table 2: CSI Link Layer Parameters and Values [20]

Most of these parameters are fairly straightforward. The only one that has not been discussed is the agent profile. This field characterizes the role of the device and contains other link level information, which is used to optimize for specific roles. For example, a ‘mobile’ profile agent would likely have much more aggressive power saving features than a desktop part. Similarly, a server agent might disable some prefetch techniques that are effective for multimedia workloads but tend to reduce performance for more typical server applications. Additionally, the two CSI agents will communicate what ‘type’ each one belongs to. Some different types would include caching agents, memory agents, I/O agents, and other agents that are defined by the CSI specification.


Power Saving Techniques


Power and heat have become first order constraints in modern MPU design, and are equally important in the design of interconnects. Thus it should come as no surprise that CSI has a variety of power saving techniques, which tend to span both the physical and link layer.

The most obvious technique to reduce power is using reduced or low-power states, just as in a microprocessor. CSI incorporates at least two different power states which boost efficiency by offering various degrees of power reduction and wake-up penalties for each side of a link [17]. The intermediate power state, L0s, saves some power, but has a relatively short wake-up time to accommodate brief periods of inactivity [28]. There are several triggers for entering the L0s state; they can include software commands, an empty or near empty transaction queue (at the transmitter), a protocol message from an agent, etc. When the CSI link enters the L0s state, an analog wake-up detection circuit is activated, which monitors for any signals which would trigger an exit from L0s. Additionally, a wake-up can be caused by flits entering the transaction queue.

During normal operation, even if one half of the link is inactive, it will still have to send idle flits to the other side to maintain flow control and provide acknowledgement that flits are being received correctly. In the L0s state, the link stops flow control temporarily and can shut down some, but not all, of the circuitry associated with the physical layer. Circuits are powered down based on whether or not they can be woken up within a predetermined period of time. This wake-up timer is configurable and the value likely depends upon factors such as the target market (mobile, desktop or server) and power source (AC versus battery). For instance, the bit lanes can generally be held in an electrical idle so they do not consume any power. However, the clock recovery circuits (receiver side PLLs or DLLs) must be kept active and periodically recalibrated. This ensures that when the link is activated, no physical layer initialization is required, which keeps the wake up latency relatively low. Generally, increasing the timer would improve the power consumption in L0s, but could negatively impact performance. Intel’s patents indicate that the wake-up timer can be set as low as 20ns, or roughly 96-128 cycles [19].

For more dramatic power savings, CSI links can be put into a low power state. The L1 state is optimized specifically for the lowest possible power, without regard for the wake-up latency, as it is intended to be used for prolonged idle periods. The biggest difference between the L0s and L1 states is that in the latter, the DLLs or PLLs used for clock recovery and skew compensation are turned off. This means that the physical layer of the link must be retrained when it is turned back on, which is fairly expensive in terms of latency, roughly 10us [17]. However, the benefit is that the link barely dissipates any power when in the L1 state. Figure 3 shows a state diagram for the links, including various resets and the L1 and L0s states.



Figure 3: CSI Initialization and Power Saving State Diagram [19]

Another power saving trick for CSI addresses situations where the link is underutilized, but must remain operational. Intel’s engineers designed CSI so that the link width can be dynamically modulated [27]. This is not too difficult, since the physical link between two CSI agents can vary between 5, 10 and 20 bits wide and the link layer must be able to efficiently accommodate each configuration. The only additional work is designing a mechanism to switch between full, half and quarter-width and ensuring that the link will operate correctly during and after a width change.

Note that width modulation is separate for each unidirectional portion of a link, so one direction might be wider to provide more bandwidth, while the opposite direction is mostly inactive. When the link layer is auto-negotiating, each CSI agent will keep track of the configurations supported by the other side (i.e. full width, half-width, quarter-width). Once the link has been established and is operating, each transmitter will periodically check to see if there is an opportunity to save power, or if more bandwidth is required.

If the link bandwidth is not being used, then the transmitter will select a narrower link configuration that is mutually supported and notify the receiver. Then the transmitter will modulate to a new width, and place the inactivated quadrants into the L0s or L1 power saving states, and the receiver will follow suit. One interesting twist is that the unused quadrants can be in distinct power states. For example, a full-width link could modulate down to a half-width link, putting quadrants 0 and 1 into the L1 state, and then later modulate down to a quarter-width link, putting quadrant 2 into the L0s state. In this situation, the link could respond to an immediate need for bandwidth by activating quadrant 2 quickly, while still saving a substantial amount of power.
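
The bookkeeping might look something like the sketch below, which reproduces the down-modulation example from the preceding paragraph. The quadrant grouping and the L0/L0s/L1 state names come from the text; the rest is our own simplification (wake-up delays are elided).

WIDTHS = {20: (0, 1, 2, 3), 10: (2, 3), 5: (3,)}   # active quadrants per width

class Link:
    """Per-direction width modulation state (illustrative model only)."""
    def __init__(self):
        self.width = 20
        self.quadrant_state = {q: "L0" for q in range(4)}  # L0 = active

    def modulate(self, new_width, parked_state="L0s"):
        assert new_width in WIDTHS
        active = set(WIDTHS[new_width])
        for q in range(4):
            if q in active:
                self.quadrant_state[q] = "L0"      # wake-up handshake elided
            elif self.quadrant_state[q] == "L0":
                self.quadrant_state[q] = parked_state
        self.width = new_width

link = Link()
link.modulate(10, parked_state="L1")   # quadrants 0 and 1 go to L1
link.modulate(5,  parked_state="L0s")  # quadrant 2 to L0s; 0 and 1 stay in L1
print(link.quadrant_state)             # {0:'L1', 1:'L1', 2:'L0s', 3:'L0'}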

If more bandwidth is required, the process is slightly more complicated. First, the transmitter will wake up its own circuitry, and also send out a wake-up signal to the receiver. However, because the wake-up is not instantaneous, the transmitter will have to wait for a predetermined and configurable period of time. Once this period has passed, and the receiver is guaranteed to be awake, then the transmitter can finally modulate to a wider link, and start transmitting data at the higher bandwidth.

Most of the previously discussed power saving techniques are highly dynamic and difficult to predict. This means that engineers will naturally have to build in substantial guard-banding to guarantee correct operation. However, CSI also offers deterministic thermal throttling [26]. When a CSI agent reaches a thermal stress point, such as exceeding TDP for a period of time, or exceeding a specific temperature on-die, the overheating agent will send a thermal management request to other agents that it is connected to via CSI. The thermal management request typically includes a specified acknowledgement window and a sleep timer (these could be programmed into the BIOS, or dynamically set by the overheating agent). If the other agent responds affirmatively within the acknowledgement window, then both sides of the link will shut down for the specified sleep time. Using an acknowledgement window ensures that the other agent has the flexibility to finish in-flight transactions before de-activating the CSI link.
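
A rough sketch of that handshake follows. The acknowledgement window and sleep timer come from the patent description above; the message flow, class names and timing values are our own illustration.

import time

class Agent:
    def __init__(self, name): self.name = name
    def request_sleep(self, sleep_s):
        # The peer may drain in-flight transactions before acknowledging.
        return True
    def sleep(self, sleep_s):
        print(f"{self.name}: link down for {sleep_s * 1e3:.0f} ms")

def thermal_throttle(hot, peer, ack_window_s, sleep_s):
    """Both sides of the link sleep only if the peer acks within the window."""
    deadline = time.monotonic() + ack_window_s
    acked = peer.request_sleep(sleep_s)
    if acked and time.monotonic() <= deadline:
        hot.sleep(sleep_s)
        peer.sleep(sleep_s)
        return True
    return False    # window missed: keep running rather than deadlock

thermal_throttle(Agent("cpu0"), Agent("ioh"), ack_window_s=0.001, sleep_s=0.010)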


Coherency Leaps Forward at Intel


CSI is a switched fabric and a natural fit for cache coherent non-uniform memory architectures (ccNUMA). However, simply recycling Intel’s existing MESI protocol and grafting it onto a ccNUMA system is far from efficient. The MESI protocol complements Intel’s older bus-based architecture and elegantly enforces coherency. But in a ccNUMA system, the MESI protocol would send many redundant messages between different nodes, often with unnecessarily high latency. In particular, when a processor requests a cache line that is stored in multiple locations, every location might respond with the data. However, the requesting processor only needs a single copy of the data, so the redundant responses waste bandwidth.

Intel's solution to this issue is rather elegant. They adapted the standard MESI protocol to include an additional state, the Forwarding (F) state, and changed the role of the Shared (S) state. In the MESIF protocol, only a single instance of a cache line may be in the F state and that instance is the only one that may be duplicated [3]. Other caches may hold the data, but it will be in the shared state and cannot be copied. In other words, the cache line in the F state is used to respond to any read requests, while the S state cache lines are now silent. This makes the line in the F state a first amongst equals when responding to snoop requests. By designating a single cache line to respond to requests, coherency traffic is substantially reduced when multiple copies of the data exist.

When a cache line in the F state is copied, the F state migrates to the newer copy, while the older one drops back to S. This has two advantages over pinning the F state to the original copy of the cache line. First, because the newest copy of the cache line is always in the F state, it is very unlikely that the line in the F state will be evicted from the caches. In essence, this takes advantage of the temporal locality of the request. The second advantage is that if a particular cache line is in high demand due to spatial locality, the bandwidth used to transmit that data will be spread across several nodes.
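
A minimal sketch of the F-state migration described above, assuming the requested line is clean and already cached somewhere in the system (the M and E writeback paths are elided, and the data structures are our own):

M, E, S, I, F = "M", "E", "S", "I", "F"

def read_shared_line(caches, requester, addr):
    """Read snoop for a clean line; only the F copy may be duplicated."""
    for cache in caches:
        if cache.get(addr, I) == F:
            cache[addr] = S               # old holder falls back to shared
            caches[requester][addr] = F   # newest copy becomes the forwarder
            return "forwarded from a peer cache"
    caches[requester][addr] = E           # no cached copy: memory supplies it
    return "read from memory at the home node"

c0, c1, c2 = {0x40: F}, {0x40: S}, {}
print(read_shared_line([c0, c1, c2], requester=2, addr=0x40))
print(c0, c1, c2)   # exactly one F (the newest copy); all other copies are S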

Figure 4 demonstrates the advantages of MESIF over the traditional MESI protocol, reducing two responses to a single response (acknowledgements are not shown). Note that a peer node is simply a node in the system that contains a cache.



Figure 4: MESIF versus MESI Protocol

In general, MESIF is a significant step forward for Intel’s coherency protocol. However, there is at least one optimization which Intel did not pursue: the Owner state that is used in the MOESI protocol (found in the AMD Opteron). The O state is used to share dirty cache lines (i.e. lines that have been written to, where memory holds older, stale data) without writing back to memory.

Specifically, if a dirty cache line is in the M (modified) state, then another processor can request a copy. The dirty cache line switches to the Owned state, and a duplicate copy is made in the S state. As a result, any cache line in the O state must be written back to memory before it can be evicted, and the S state no longer implies that the cache line is clean. In comparison, a system using MESIF or MESI would change the cache line to the F or S state, copy it to the requesting cache and write the data back to memory; the O state avoids that write-back, saving some bandwidth. It is unclear why Intel avoided using the O state in the newer coherency protocol for CSI; perhaps the architects decided that the performance gain was too small to justify the additional complexity.

Table 3 summarizes the different protocols and states for the MESI, MOESI and MESIF cache coherency protocols.



Table 3: Overview of States in Snoop Protocols


A Two Hop Protocol for Low Latency


In a CSI system, each node is assigned a unique node ID (NID), which serves as an address on the network fabric. Each node also has a Peer Agent list, which enumerates the other nodes in the system that it must snoop when requesting data from memory (typically peers contain a cache, but could also be an I/O hub or device with DMA). Similarly, each transaction is assigned an identifier (TID) for tracking at each involved node. The TID, together with a destination and source NID, forms a globally unique transaction identifier [37]. The number of TIDs, and hence of outstanding transactions, is limited, and will likely be one differentiating factor between Xeon DP, Xeon MP and Itanium systems. Table 4 describes the different fields that can be used in each CSI message, although some messages do not use all fields. For example, a snoop response from a processor that holds data in the shared state will not contain any data, just an acknowledgement.



Table 4: CSI Message Fields [1]
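
A small sketch of the identifier scheme: the NID/TID structure and the bounded transaction pool come from the text and [37], while the field and class names are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class GlobalTxnId:
    """Destination NID + source NID + TID uniquely name a transaction."""
    dest_nid: int
    src_nid: int
    tid: int

class TidPool:
    """Each node's finite TID pool bounds its in-flight transactions."""
    def __init__(self, max_outstanding):
        self.free = set(range(max_outstanding))
    def allocate(self):
        if not self.free:
            raise RuntimeError("no TIDs free: new requests must stall")
        return self.free.pop()
    def release(self, tid):
        self.free.add(tid)

pool = TidPool(max_outstanding=32)   # pool size is a made-up example value
txn = GlobalTxnId(dest_nid=1, src_nid=0, tid=pool.allocate())
print(txn)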

CSI was designed as a natural extension of the existing front side bus protocol; although there are some changes, many of the commands can be easily traced to the commands on the front side bus. A set of commands is listed in the ‘250 patent.

In a three hop protocol, such as the one used by AMD’s Opteron, read requests are first sent to the home node (i.e. where the cache line is stored in memory). The home node then snoops all peer nodes (i.e. caching agents) in the system, and reads from memory. Lastly, all snoop responses from peer nodes and the data in memory are sent to the requesting processor. This transaction involves three point-to-point messages: requestor to home, home to peer and peer to requestor, plus a read from memory, before the data can be consumed.

Rather than implement a three hop cache coherency protocol, CSI was designed with a novel two hop protocol that achieves lower latency. In the protocol used by CSI, transactions go through three phases; however, data can be used after the second phase or hop. First, the requesting node sends out snoops to all peer nodes (i.e. caches) and the home node. Each peer node sends a snoop response to the requesting node. When the second phase has finished, the requesting node sends an acknowledgement to the home node, where the transaction is finally completed.
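
The three phases can be written out explicitly. The sketch below logs the messages of a CSI-style read; only the snoop/response/acknowledge structure comes from the text, and which peer holds the line is an arbitrary assumption. The critical path ends when the data arrives in phase two.

def csi_two_hop_read(requester, peers, home):
    """Message log for the three phases of a CSI read (illustrative)."""
    log = []
    # Phase 1: the requester snoops every peer node and the home node.
    for p in peers + [home]:
        log.append((1, requester, p, "snoop"))
    # Phase 2: peers respond directly to the requester. Here the first
    # peer is assumed to hold the line; its data ends the critical path.
    log.append((2, peers[0], requester, "data"))
    for p in peers[1:]:
        log.append((2, p, requester, "ack (no copy)"))
    # Phase 3: the requester tells the home node the transaction completed.
    log.append((3, requester, home, "complete"))
    return log

for entry in csi_two_hop_read("cpu0", ["cpu1", "cpu2"], "home"):
    print(entry)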

In the rare case of a conflict, the home node is notified and will step in and resolve transactions in the appropriate order to ensure correctness. This could force one or more processors in the system to roll back, replay or otherwise cancel the effects of a load instruction. However, the additional control circuitry is neither frequently used nor on any critical path, so it can be tuned for low leakage power.

In the vast majority of transactions, the home node is a silent observer, and the requestor can use the new data as soon as it is received from the peer agent’s cache, which is the lowest possible latency. In particular, a two hop protocol does not have to wait to access memory in the home node, in contrast to three hop protocols. Figure 5 compares the critical paths between two hop and three hop protocols, when data is in a cache (note that not all snoops and responses are shown, only the critical path).



Figure 5: Critical Path Latency for Two and Three Hop Protocols

This arrangement is somewhat unusual in that the requesting processor is conceptually pushing transactions into the system and home node. In three hop protocols, the home node acts as a gate keeper and can defer a transaction if the appropriate queues are full, while only stalling the requestor. In a CSI-based system, the home node receives messages after the transaction is already in progress or has already occurred. If these incoming transactions were lost, the system would be unable to maintain coherency. Therefore, to ensure correctness, CSI home nodes must have a relatively large pre-allocated buffer to support as many transactions as can reasonably be initiated.


Virtual Channels


One of the most difficult challenges in designing multiprocessor systems is guaranteeing forward progress, and avoiding strange network behavior that limits performance. Unfortunately, this problem is an inherent aspect of multiprocessor system design, and really impacts almost every decision made; there is no clean way to separate it from other concerns. Every coherent transaction across CSI has three phases: the snoop from the requesting CPU, the responses from the peer nodes, and the acknowledgement to the home node. As noted previously, CSI uses separate link layer virtual channels to improve performance and avoid livelocks or deadlocks. Each transaction phase has one or more associated virtual channels: snoop, response and home. These arrangements should come as no surprise, since the EV7 used similar techniques and designations. Additional channels discussed in patents include short message, data and data response channels [34].

One reason for providing different virtual channels is that the traffic characteristics of the three are quite distinct. Packets sent across the home channel are typically very small and must be received in order. In the most common case, a home packet would simply be an acknowledgement that a transaction can retire. The response channel sometimes includes larger packets, often containing actual cache lines (although these may go on the data response channel), and can be processed out of order to improve performance. The snoop channel is mostly smaller packets, and can also operate out of order. The optimizations for each channel are different and by separating each class of traffic, Intel architects can more carefully tune the system for high performance and lower power.

There are also priority relationships between the different classes of traffic. When the system is saturated, the home phase channels are given the highest priority, which ensures that some transactions will retire, leaving the system and reducing traffic. The next highest priority is the response phase and its associated channels, which provide data to processors so they can continue computation, and initiate the home phase. The lowest priority belongs to the snoop phase channels, which are used to start new transactions and are the first to be throttled back.
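
That ordering is easy to express as a simple arbiter. The sketch below is our own minimal model of the home > response > snoop ranking described above; a real link layer arbitrates per flit with credits and fairness rules this model omits.

from collections import deque

PRIORITY = ("home", "response", "snoop")   # drain order under saturation

class ChannelArbiter:
    def __init__(self):
        self.queues = {vc: deque() for vc in PRIORITY}
    def enqueue(self, vc, flit):
        self.queues[vc].append(flit)
    def next_flit(self):
        # Home traffic first: retiring transactions reduces system load.
        for vc in PRIORITY:
            if self.queues[vc]:
                return vc, self.queues[vc].popleft()
        return None

arb = ChannelArbiter()
arb.enqueue("snoop", "new request")
arb.enqueue("home", "retire txn 7")
print(arb.next_flit())   # ('home', 'retire txn 7') drains ahead of the snoop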



Dynamic Reconfiguration


One of the problems with the existing bus infrastructure is that the interface presented to software is not particularly clean or isolated. Specifically, components of Intel’s system architecture cannot be dynamically added or removed from the front-side bus; instead the bus and all attached components must be shut down, and then restarted after disabling or adding the component in question. For instance, to remove one faulty processor in a 16 socket server, an entire node (4 processors, one memory controller hub, the local memory and I/O) must be off-lined.

CSI supports both in-band (coordinated by a system component) and out-of-band (coordinated by a service processor) dynamic reconfiguration of system resources, also known as hot plug [39]. A system agent and the firmware work together to quiesce individual components and then modify the routing tables and system addressing decoders, so that the changes appear to be atomic to the operating system and software.

To add a system resource, such as a processor, first the firmware creates a physical and logical profile for the new processor in the rest of the system. Next, the firmware enables the CSI links between the new processor and the rest of the system. The firmware initializes the new processor’s CSI link and sends data about the system configuration to the new processor. The new processor initializes itself and begins self-testing, and will notify the firmware when it is complete. At that point, the firmware notifies the OS and the rest of the system to begin operating with the new processor in place.

System resources, such as a processor, memory or I/O hub, are removed through a complementary mechanism. These two techniques can also be combined to move resources seamlessly between different system partitions.


Multiprocessor Systems


When the P6 front side bus was first released, it caused a substantial shift in the computer industry by supporting up to four processors without any chipset modifications. As a result, Intel based systems using Linux or Windows penetrated and dominated the workstation and entry level server market, largely because the existing architectures were priced vastly higher.

However, Intel hesitated to extend itself beyond that point. This hesitancy was partially due to economic incentives to maintain the same infrastructure, but also to the preferences of key OEMs such as IBM, HP and others, who provide added value in the form of larger multiprocessor systems. Balancing all the different priorities inside Intel while pleasing partners is nearly impossible, and this has handicapped Intel for the past several years. However, it is quite clear that any reservations at Intel disappeared around 2002-3, when CSI development started.

Intel's patents clearly anticipate two and four processor systems, as shown in Figure 6. Each processor in a dual socket system will require a single coherent full width CSI link, with one or two half-width links to connect to I/O bridges, making the system fully symmetric (half-width links are shown as dotted lines). Processors in four socket systems will be fully connected, and each processor could also connect directly to the I/O bridge. More likely, each processor, or pair of processors, could connect to a separate I/O bridge to provide higher I/O bandwidth in the four socket systems.




Figure 6: 2 and 4P CSI System Diagrams [2] [34]

Fully interconnected systems, such as those shown in Figure 6, enjoy several advantages over partially connected solutions. First of all, transactions occur at the speed of the slowest participant. Hence, a system where every caching agent (including the I/O bridge) is only one hop away ensures lower transaction latency. Secondly, by lowering transaction latency, the number of transactions in flight is reduced (since the average transaction lifetime is shorter). This means that the buffers for each caching agent can be smaller, faster and more power efficient. Lastly, operating systems and applications have trouble handling NUMA optimizations, so more symmetrical systems are ideal from a software perspective.



Interacting with I/O


Of course, ensuring optimal communication between multiple processors is just one part of system design. The I/O architecture for Intel’s platform is also important, and CSI brings along several important changes in that area as well [36].

As Figure 6 indicates, some CSI based systems contain multiple I/O hubs, which need to communicate with each other. Since the I/O hubs are not connected, Intel’s engineers devised an efficient method to forward I/O transactions (typically PCI-Express) through CSI. Because CSI was optimized for coherent traffic, it lacks many of the features which PCI-Express relies upon, such as I/O specific packet attributes. To solve this problem, PCI-E packets are tunneled through CSI, leaving much or all of the PCI-E header information intact.


Beyond Multiprocessors


In a forward looking decision by Intel, CSI is fairly agnostic with respect to system expansion. Systems can be expanded in a hierarchical manner, which is the path that IBM took for their older X3 chipset, where one agent in each local cell acts as a proxy for the rest of the system. Certainly, the definition of CSI lends itself to hierarchical arrangements, since a “CSI node” is an abstraction and may in fact consist of multiple processors. For instance, in a 16 socket system, there might be four nodes, and each node might contain four sockets and resemble the top diagram in Figure 6. Early Intel patents seem to point to hierarchical expansion as being preferred, although later patents appear to be less restrictive [2] [4]. As an alternative to hierarchical expansion, larger systems can be built using a flat topology (the 2 dimensional torus used by the EV7 would be an example). However, a flat system must have a node ID for each processor, whereas a hierarchical system needs only enough node IDs for the processors in each ‘cell’. So, while a flat 32 socket system would require 32 distinct node IDs, a comparable system using 8 nodes of 4 sockets would only need 4 distinct node IDs.

Most MPU vendors have used node ID ranges to differentiate between versions of their processors. For instance, Intel and AMD both draw clear distinctions between 1, 2 and 4P server MPUs, each with an increasing level of RAS, more node IDs and a substantially higher price. Furthermore, a flat system with 8+ processors in all likelihood needs snoop filters or directories for scalability. However, Intel’s x86 MPUs will probably not natively support directories or snoop filters, instead leaving that choice to OEMs. This flexibility for CSI systems means that OEMs with sufficient expertise can differentiate their products with custom node controllers for each local node in a hierarchical system.

Directory based coherency protocols are the most scalable option for system expansion. However, directories use a three hop coherency protocol that is quite different from CSI’s. In the first phase, the requestor sends a request to the home node, which contains the directory that lists which agents have a copy of the cache line. The home node then snoops those agents, while sending no messages to uninvolved third parties. Lastly, all the agents receiving a snoop send a response to the requestor. This presents several problems. The directory itself is difficult to implement, since every cache miss in the system generates both a read (the lookup) and a write (updating ownership) to the directory. The latency is also higher than a snooping broadcast protocol, although the system bandwidth used is lower, hence providing better scalability. Snoop filters are a more natural extension of the CSI infrastructure, suitable for mid-sized systems.

Snoop filters focus on a subset of the key data to reduce the number of snoop responses. The classic example of a snoop filter, as found in Intel’s Blackford, Seaburg and Clarksboro chipsets, tracks remotely cached data. Snoop filters have an advantage in that they preserve the low latency of the CSI protocol, while a directory would require changing to a three hop protocol. Not every element in the system must have a snoop filter either; CSI is flexible in that regard as well.
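
In outline, a snoop filter of this sort tracks which lines may be cached remotely so that irrelevant snoops can be dropped. The sketch below is a heavily simplified model of the idea (ignoring capacity limits and eviction), not of any particular Intel chipset.

class SnoopFilter:
    """Per-segment sets of line addresses that may be cached there."""
    def __init__(self, segments):
        self.cached = {s: set() for s in segments}

    def on_fill(self, segment, addr):
        self.cached[segment].add(addr)   # line now cached on that segment

    def snoop_targets(self, requester, addr):
        """Only segments that may hold the line need to see the snoop."""
        return [s for s, lines in self.cached.items()
                if addr in lines and s != requester]

sf = SnoopFilter(segments=["busA", "busB"])
sf.on_fill("busB", 0x1000)
print(sf.snoop_targets("busA", 0x1000))   # ['busB']: snoop only where needed
print(sf.snoop_targets("busA", 0x2000))   # []: no snoop traffic at all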


Remote Prefetch


Another interesting innovation that may show up in CSI is remote prefetch [9]. Hardware prefetching is nothing new to modern CPUs; it has been around since the 130nm Pentium III. Typically, hardware prefetch works by tracking cache line requests from the CPU and trying to detect a spatial or temporal pattern. For instance, loading a 128MB movie will result in roughly one million sequential requests (temporal) for 128B cache lines that are probably adjacent in memory (spatial). A prefetcher in the cache controller will figure out this pattern, and then start requesting cache lines ahead of time to hide memory access latencies. However, general purpose systems rely on the cache and memory controllers prefetching for the CPU and do not receive feedback from other system agents.

One of the patents relating to CSI is for a remote device initiating a prefetch into processor caches. The general idea is that in some situations, remote agents (an I/O device or coprocessor) might have more knowledge about where data is coming from than the simple pattern detectors in the cache or memory controller. To take advantage of that, a remote agent sends a prefetch directive message to a cache prefetcher. This message could be as simple as indicating where the prefetch would come from (and therefore where to respond), but in all likelihood would include information such as data size, priority and addressing information. The prefetcher can then respond by initiating a prefetch or simply ignoring the directive altogether. In the former case, the prefetcher would grant direct cache access to the remote agent, which then writes the data into the cache. Additionally, the prefetcher could request that the remote agent pre-process the data. For example, if the data is compressed, encoded or encrypted, the remote agent could transform the data to an immediately readable format, or route it over the interprocessor fabric to a decoder or other mechanism.
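
Put in pseudo-API form, the flow might look like the sketch below. The field names and the granting policy are illustrative assumptions; only the directive / grant / direct-cache-access structure is taken from the patent description above.

from dataclasses import dataclass

@dataclass
class PrefetchDirective:
    source_agent: str
    addr: int
    size: int          # e.g. 4096 for a 4KB payload
    priority: int

class CachePrefetcher:
    def __init__(self, cache):
        self.cache = cache
    def on_directive(self, d: PrefetchDirective) -> bool:
        if d.priority < 1:
            return False               # the prefetcher is free to ignore hints
        # Grant direct cache access: the remote agent writes the payload
        # straight into the cache, line by line, in the Modified state.
        for line in range(d.addr, d.addr + d.size, 64):
            self.cache[line] = ("M", d.source_agent)
        return True

cache = {}
nic = PrefetchDirective("nic0", addr=0x10000, size=4096, priority=2)
print(CachePrefetcher(cache).on_directive(nic), len(cache))  # True 64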

The most obvious application for remote prefetching is improving I/O performance when receiving data from a high speed Ethernet, FibreChannel or Infiniband interface (the network device would be the remote agent in that case). This would be especially helpful if the transfer unit is large, as is the case for storage protocols such as iSCSI or FibreChannel, since the prefetch would hide latency for most of the data. To see how remote prefetch could improve performance, Figure 7 shows an example using a network device.



Figure 7: Remote Prefetch for Network Traffic

On the left is a traditional system, which is accessing 4KB of data over a SAN. It receives a packet of data through the network interface, and then issues a write-invalidate snoop for a cache line to all caching agents in the system. A cache line in memory is allocated, and the data is stored through the I/O hub. This repeats until all 4KB of data has been written into the memory, at which point the I/O device issues an interrupt to the processor. Then, the processor requests the data from memory and snoops all the caching agents; lastly it reads the memory into the cache and begins to use the data.

In a system using remote prefetch, the network adapter begins receiving data and the packet headers indicate that the data payload is 4KB. The network adapter then sends a prefetch directive, through the I/O hub, to the processor’s cache, which responds by granting direct cache access. The I/O hub will issue a write-invalidate snoop for each cache line written, but instead of storing to memory, the data is placed directly in the processor’s cache in the modified state. When all the data has been moved, the I/O hub sends an interrupt and the processor begins operating on the data already in the cache. Compared to the previously described method, remote prefetching demonstrates several advantages. First, it eliminates all of the snoop requests by the processor to read the data from memory into the cache. Second, it reduces the load on the main memory system (especially if the processors can stream results back to the network adapter) and modestly decreases latency.

While the most obvious application of the patented technique is for network I/O, remote prefetch could work with any part of a system that does caching. For instance, in a multiprocessor system, remote prefetch between different processors, or even coprocessors or acceleration devices is quite feasible. It is unclear whether this feature will be made available to coprocessor vendors and other partners, but it would certainly be beneficial for Intel and a promising sign for a more open platform going forward.


Speculations on CSI Clients


While the technical details of CSI are well documented in various Intel patents, there is relatively little information on future desktop or mobile implementations. These next two sections make a transition from fairly solid technical details into the realm of educated, but ultimately speculative predictions.

CSI will modestly impact the desktop and mobile markets, but may not bring any fundamental changes. Certain Intel patents seem to imply that discrete memory controllers will continue to be used with some MPUs [9]. In all likelihood, Intel will offer several different product variations based on the target market. Some versions will use integrated memory controllers, some will offer an on-package northbridge and some will probably have no system integration at all.

Intel has a brisk chipset business on both the desktop and notebook side that keeps older fabs effectively utilized, an essential element of Intel’s capital strategy. If Intel were to integrate the northbridge in all MPUs, it would force the company to find other products which can use older fabs, or shutter some of the facilities. Full integration also increases the pin count for each MPU, which increases production costs. While an integrated memory controller increases performance by reducing latency, many products do not need the extra performance, nor is it always desirable from a marketing perspective.

For technical reasons, an integrated memory controller can also be problematic. Integrated graphics controllers share main memory to reduce cost. As a result, integrated graphics benefits substantially from sharing a die with the memory controller, as it does currently for Intel based systems. However, integrating graphics on the processor seems a little aggressive for a company that has yet to produce an on-die memory controller, and is a waste of cutting edge silicon; most high performance systems simply do not use integrated graphics.

Intel’s desktop version of Nehalem is code-named Bloomfield, and it seems clear that the high performance MPUs, which are targeted at gamers, will feature on-die memory controllers. The performance benefits of reducing memory latency will probably be used as product differentiation by Intel to encourage gamers to move to the Extreme Edition line and justify the higher prices. However, on-die or on-package graphics is unlikely, given that most OEMs will use higher performance discrete solutions from NVIDIA or AMD. The width of the CSI connection between the MPU and the chipset may be another differentiating factor. While a half-width link will work for mid-range systems, high-end gaming systems will require more bandwidth. Modern high performance GPUs use a PCI-E x16 slot, which provides 4GB/s in each direction. Hence, it is quite conceivable that by 2009 a pair of high-end GPUs would require ~16GB/s in each direction. Given that gaming systems often stress graphics, network and disk, a full width CSI link may be required to provide appropriate performance.

Other desktop parts based on Bloomfield will focus on low cost and greater integration. It is very likely that these MPUs will be connected via CSI to a second die containing a memory controller and integrated graphics, all packaged inside a single MCM. A CSI link (probably half-width) would connect the northbridge to the rest of the chipset. This solution would let Intel use older fabs to produce the northbridge, and would enable more manufacturing flexibility: each component could be upgraded individually, with fewer dependencies between the two. Intel will probably also produce an MPU with no integrated system features, which will let OEMs use chipsets from 3rd party vendors, such as NVIDIA, VIA and SiS.

Gilo, the mobile proliferation of Nehalem, will face many of the same issues as desktop processors, but also some that are unique to the notebook market. Mobile MPUs do not really need the lower latency; in many situations they sacrifice performance by only populating a single channel of memory, or operating at relatively slow transfer rates. An integrated memory controller would also require a separate voltage plane from the cores, hence systems would need an additional VRM on the motherboard. The clock distribution would also need to be designed so that the cores can vary frequency independently of the memory controller. Consequently, an on-die memory controller is unlikely because of the lack of benefits and additional complexity.

The implementations for Gilo will most likely resemble the mid-range and low-end desktop product configuration. The more integrated products will feature the northbridge and graphics in the same package as the MPU, connected by CSI. A more bare-bones MPU would also be offered for OEMs that prefer higher performance discrete graphics, or wish to use alternative chipsets.

While the system architecture for Intel’s desktop and mobile offerings will change a bit, the effects will probably be more subtle. The majority of Intel MPUs will still require external memory controllers, but they will be integrated on the MPU package itself. This will not fundamentally improve Intel’s performance relative to AMD’s desktop and mobile offerings. However, it will make Intel’s products more attractive to OEMs, since the greater integration will reduce the number of discrete components on system boards and lower the overall cost. In many ways the largest impact will be on the graphics vendors, since it will make all their solutions (both integrated and discrete) more expensive relative to a single MCM from Intel.


Speculations on CSI Servers


In the server world, CSI will be introduced in tandem with an on-die memory controller. The impact of these two modifications will be quite substantial, as they address the few remaining shortcomings in Intel’s overall server architecture and substantially increase performance. This performance improvement comes from two places: the integrated memory controller will lower memory latency, while the improved interconnects for 2-4 socket servers will increase bandwidth and decrease latency.

To Intel, the launch of a broad line of CSI based systems will represent one of the best opportunities to retake server market share from AMD. New systems will use the forthcoming Nehalem microarchitecture, which is a substantially enhanced derivative of the Core microarchitecture, and features simultaneous multithreading and several other enhancements. Historically speaking, new microarchitectures tend to win the performance crown and presage market share shifts. This happened with the Athlon, the Pentium 4, Athlon64/Opteron, and the Core 2 and it seems likely this trend will continue with Nehalem. The system level performance benefits from CSI and integrated memory controllers will also eliminate Intel’s two remaining glass jaws: the older front side bus architecture and higher memory latency.

The single-processor server market is likely where CSI will have the least impact. For these entry level servers, the shared front side bus is not a substantial problem, since there is little communication compared to larger systems. Hence, the technical innovations in CSI will have relatively little impact in this market. AMD also has a much smaller presence in this market, because their advantages (which are similar to the advantages of CSI) are less pronounced. Clearly, AMD will try to make inroads into this market; if the market responds positively to AMD’s solution that may hint at future reactions to CSI.

Currently in the two socket (DP) server market, Intel enjoys a substantial performance lead for commercial workloads, such as web serving or transaction processing. Unfortunately, Intel’s systems are somewhat handicapped because they require FB-DIMMs, which use an extra 5-6 watts per DIMM and cost somewhat more than registered DDR2. This disadvantage has certainly hindered Intel in the last year, especially with customers who require lots of memory or extremely low power systems. While Intel did regain some server market share, AMD’s Opteron is still the clear choice for almost all high performance computing, where the superior system architecture provides more memory and processor communication bandwidth. This advantage has been a boon for AMD, as the HPC market is the fastest growing segment within the overall server market.

Gainestown, the first CSI based Xeon DP, will arrive in the second half of 2008, likely before any of the desktop or mobile parts. In the dual socket market, CSI will certainly be welcome and improve Intel’s line up, featuring 2x or more the bandwidth of the previous generation, but the impact will not be as pronounced as for MP systems. Intel’s dual socket platforms are actually quite competitive because the product cycles are shorter, meaning more frequent upgrades and higher bandwidth. Intel’s current Blackford and Seaburg chipsets, with dual front side buses and snoop filters, offer reasonable bandwidth, although at the cost of slightly elevated power and thermal requirements. This too shall pass; it appears that dual socket systems will shift back to DDR3, eliminating the extra ~5W penalty for each FB-DIMM [12]. This will improve Intel’s product portfolio and put additional pressure on AMD, which is still benefitting from the FB-DIMM thermal issues. The DP server market is currently fairly close to ‘equilibrium’; AMD and Intel have split the market approximately along the traditional 80/20 lines. Consequently, the introduction of CSI systems will enhance Intel’s position, but will not spark massive shifts in market share.

The first Xeon MP to use CSI will debut in the second half of 2009, lagging behind its smaller system counterparts by an entire year. Out of all the x86 product families using CSI, Beckton will have the biggest impact, substantially improving Intel’s position in the four socket server market. Beckton will offer roughly 8-10x the bandwidth of its predecessor, dramatically improving performance. The changes in system architecture will also dramatically reduce latency, which is a key element of performance for most of the target workloads, such as transaction processing, virtualization and other mission critical applications. Since the CSI links are point-to-point, they eliminate one chip and one interconnect crossing, which will cut the latency between processors in half, or better. The integrated memory controller in Beckton will similarly reduce latency, since it also removes an extra chip and interconnect crossing.

Intel’s platform shortcomings created a weakness that AMD exploited to gain significant market share. It is estimated that Intel currently holds as little as 50% of the market for MP servers, compared to roughly 75-80% of the overall market. When CSI-based MP platforms arrive in 2009, Intel will certainly try to bring their market share back in line with the overall market. However, Beckton will be competing against AMD’s Sandtiger, a 45nm server product with 8-16 cores also slated for 2009. Given that little is known about the latter, it is difficult to predict the competitive landscape.



Itanium and CSI


CSI will also be used for Tukwila, a quad-core Itanium processor due in 2008. Creating a common infrastructure for Itanium and Xeon based systems has been a goal for Intel since 2003. Because the economic and technical considerations for these two products are different, they will not be fully compatible. However, the vast majority of the two interconnects will be common between the product lines.

One goal of a common platform for Itanium and Xeon is to share (and therefore better amortize) research, development, design and validation costs, by re-using components across Intel's entire product portfolio. Xeon and Xeon MP products ship in the tens of millions each year, compared to perhaps a million for Itanium. If the same components can be used across all product lines, the non-recurring engineering costs for Itanium will be substantially reduced. Additionally, the inventory and supply chain management for both Intel and its partners will be simplified, since some chipset components will be interchangeable.

Just as importantly, CSI and an integrated memory controller will substantially boost the performance of the Itanium family. Montvale, which will be released at the end of 2007, uses a 667MHz bus that is 128 bits wide, for a total of 10.6GB/s of bandwidth. This pales in comparison to the 300GB/s that a single POWER6 processor can tap into. While bandwidth is only one factor that determines performance, a 30x difference is substantial by any measure. When Tukwila debuts in 2008, it will go a long way towards leveling the playing field. Tukwila will offer 120-160GB/s between MPUs (5 CSI lanes at 4.8-6.4GT/s), and multiple integrated FB-DIMM controllers. The combination of doubling the core count, massively increasing bandwidth and reducing latency should prove compelling for Itanium customers and will likely cause a wave of upgrades and migrations similar to the one triggered by the release of Montecito in 2006.
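As a sanity check on these figures, peak bus bandwidth is just transfer rate times bus width. The short Python sketch below (the function name is ours; the inputs are the numbers quoted above) reproduces the Montvale figure:

    # Peak bus bandwidth = transfer rate x bus width, in metric GB/s.
    def bus_bandwidth_gb_s(clock_mhz, width_bits):
        return clock_mhz * 1e6 * (width_bits / 8) / 1e9

    # Montvale front side bus: 667MHz clock, 128 bits wide.
    print(bus_bandwidth_gb_s(667, 128))   # ~10.7 GB/s, the ~10.6GB/s quoted above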


Conclusion


The success of the Pentium Pro and its lineage captured the multi-billion dollar RISC workstation and low-end server market, but that success also created inertia around the bus interface. Politics within the company and with existing partners, OEMs and customers conspired to keep Intel content with the status quo. Unfortunately for Intel, AMD was not content to play second fiddle forever. The Opteron took a portion of the server market, largely by virtue of its superior system architecture and Intel’s simultaneous weakness with the Pentium 4 (Prescott) microarchitecture. While Intel might be prone to internal politics, when an external threat looms large, everything is thrown into high gear. The industry saw that with the RISC versus CISC debate, where Intel P6 engineers hung ads from the now friendly Apple in their cubes for competitive inspiration. The Core microarchitecture, Intel’s current flag bearer, was similarly the labor of a company under intense competitive pressure.

While Intel had multiple internal projects working on a next generation interconnect, the winning design for CSI was the result of collaboration between Intel veterans from Hillsboro, Santa Clara and other sites, as well as the architects who worked on DEC’s Alpha architecture. The EV7, the last Alpha, stands out for having the best system interconnect of its time, and certainly influenced the overall direction for CSI. The CSI design team was given a set of difficult, but not impossible goals: design an interconnect family that would span the range of Intel’s performance oriented computational products, from the affordable Celeron to the high-end Xeon MP and Itanium. The results were delayed, largely due to the cancellation of Whitefield, a quad core x86 processor, and the rescheduling and evisceration of Tukwila, née Tanglewood. However, Tukwila and Nehalem will feature CSI when they debut in the next two years, and the world will be able to judge the outcome.

CSI will be a turning point for the industry. In the server world, CSI, paired with an integrated memory controller, will erase or reverse Intel’s system architecture deficit to AMD. Intel’s microprocessors will need less cache because of the lower memory and remote access latency; the specs for Tukwila call for 6MB/core rather than the 12MB/core in Montecito. This in turn will free up more die area for additional cores, or more economical die sizes. These changes will put Intel on a more equal footing with AMD, which has had a leg up in system architecture with their integrated memory controller and HyperTransport. As a result, Intel will be in a good position to retake lost market share in the server world in 2008/9 when CSI based systems debut.

In some ways, CSI and integrated memory controllers are the last piece of the puzzle to get Intel’s servers back on track. The new Core microarchitecture has certainly proven to be a capable design, even when paired with the front side bus and a discrete memory controller. The multithreaded microarchitecture for Nehalem, coupled with an integrated memory controller and the CSI system fabric should be an even more impressive product. For Intel, 2008 will be a year to look forward to, thanks in no small part to the engineers who worked on CSI.


References


[1] Batson, B. et al. Messaging Protocol. US Patent Application 20050262250A1. November 24, 2005.
[2] Batson, B. et al. Cache Coherence Protocol. US Patent Application 20050240734A1. October 27, 2005.
[3] Hum, H. et al. Forward State for use in Cache Coherency in a Multiprocessor System. US Patent No. 6,922,756 B2. July 26, 2005.
[4] Hum, H. et al. Hierarchical Virtual Model of a Cache Hierarchy in a Multiprocessor System. US Patent Application 20040123045A1. June 24, 2004.
[5] Beers, R. et al. Non-Speculative Distributed Conflict Resolution for a Cache Coherency Protocol. US Patent No. 6,954,829 B2. October 11, 2005.
[6] Hum, H. et al. Speculative Distributed Conflict Resolution for a Cache Coherency Protocol. US Patent Application 20040122966A1. June 24, 2004.
[7] Hum, H. et al. Hierarchical Directories for Cache Coherency in a Multiprocessor System. US Patent Application 20060253657A1. November 9, 2006.
[8] Cen, Ling. Method, System, and Apparatus for a Credit Based Flow Control in a Computer System. US Patent Application 20050088967A1. April 28, 2005.
[9] Huggahalli, R. et al. Method and Apparatus for Initiating CPU Data Prefetches by an External Agent. US Patent Application 20060085602A1. April 20, 2006.
[10] Kanter, David. Intel’s Tukwila Confirmed to be Quad Core. Real World Technologies. May 5, 2006. http://www.realworldtech.com/page.cfm?NewsID=361&date=05-05-2006#361
[11] Rust, Adamson. Intel’s Stoutland to have Integrated Memory Controller. The Inquirer. February 1, 2007. http://www.theinquirer.net/default.aspx?article=37373
[12] Intel Thurley has Early CSI Interconnect. The Inquirer. February 2, 2007. http://www.theinquirer.net/default.aspx?article=37392
[13] Cherukuri, N. et al. Method and Apparatus for Periodically Retraining a Serial Links Interface. US Patent No. 7,209,907 B2. April 24, 2007.
[14] Cherukuri, N. et al. Method and Apparatus for Interactively Training Links in a Lockstep Fashion. US Patent Application 20050262184A1. November 24, 2005.
[15] Cherukuri, N. et al. Method and Apparatus for Acknowledgement-based Handshake Mechanism for Interactively Training Links. US Patent Application 20050262280A1. November 24, 2005.
[16] Cherukuri, N. et al. Method and Apparatus for Detecting Clock Failure and Establishing an Alternate Clock Lane. US Patent Application 20050261203A1. December 22, 2005.
[17] Cherukuri, N. et al. Method for Identifying Bad Lanes and Exchanging Width Capabilities of Two CSI Agents Connected Across a Link. US Patent Application 20050262284A1. November 24, 2005.
[18] Frodsham, T. et al. Method, System and Apparatus for Loopback Entry and Exit. US Patent Application 20060020861A1. January 26, 2006.
[19] Cherukuri, N. et al. Methods and Apparatuses for Resetting the Physical Layers of Two Agents Interconnected Through a Link-Based Interconnection. US Patent No. 7,219,220 B2. May 15, 2007.
[20] Mannava, P. et al. Method and System for Flexible and Negotiable Exchange of Link Layer Functional Parameters. US Patent Application 20070088863A1. April 19, 2007.
[21] Spink, A. et al. Interleaving Data Packets in a Packet-based Communication System. US Patent Application 20070047584A1. March 1, 2007.
[22] Chou et al. Link Level Retry Scheme. US Patent No. 7,016,304 B2. March 21, 2006.
[23] Creta et al. Separating Transactions into Different Virtual Channels. US Patent No. 7,165,131 B2. January 16, 2007.
[24] Cherukuri, N. et al. Technique for Lane Virtualization. US Patent Application 20050259599A1. November 24, 2005.
[25] Steinman, M. et al. Methods and Apparatuses to Effect a Variable-width Link. US Patent Application 20050259696A1. November 24, 2005.
[26] Kwa et al. Method and System for Deterministic Throttling for Thermal Management. US Patent Application 20060294179A1. December 28, 2006.
[27] Cherukuri, N. et al. Dynamically Modulating Link Width. US Patent Application 20060034295A1. February 16, 2006.
[28] Cherukuri, N. et al. Link Power Saving State. US Patent Application 20050262368A1. November 24, 2005.
[29] Lee, V. et al. Retraining Derived Clock Receivers. US Patent Application 20050022100A1. January 27, 2005.
[30] Fan, Yongping. Matched Current Delay Cell and Delay Locked Loop. US Patent No. 7,202,715 B1. April 10, 2007.
[31] Ayyar, M. et al. Method and Apparatus for System Level Initialization. US Patent Application 20060126656A1. June 15, 2006.
[32] Frodsham, T. et al. Method, System and Apparatus for Link Latency Management. US Patent Application 20060168379A1. July 27, 2006.
[33] Frodsham, T. et al. Technique to Create Link Determinism. US Patent Application 20060020843A1. January 26, 2006.
[34] Spink. A. et al. Buffering Data Packets According to Multiple Flow Control Schemes. US Patent Application 20070053350A1. March 8, 2007.
[35] Cen, Ling. Arrangements Facilitating Ordered Transactions. US Patent Application 20040008677A1. January 15, 2004.
[36] Creta et al. Transmitting Peer-to-Peer Transactions Through a Coherent Interface. US Patent No. 7,210,000 B2. April 24, 2007.
[37] Hum, H. et al. Globally Unique Transaction Identifiers. US Patent Application 20050251599A1. November 10, 2005.
[38] Cen, Ling. Two-hop Cache Coherency Protocol. US Patent Application 20070022252A1. January 25, 2007.
[39] Ayyar, M. et al. Method and Apparatus for Dynamic Reconfiguration of Resources. US Patent Application 20060184480A1. August 17, 2006.

Sep 27, 2007

๋ฐ์ดํ„ฐ ํ†ต์‹  ๊ฐ•์˜

์ œ1์žฅ ์ •๋ณด์™€ ์ •๋ณดํ†ต์‹ 
์ œ2์žฅ ์ •๋ณดํ†ต์‹ ์˜ ์ดํ•ด๋ฅผ ์œ„ํ•œ ๊ฐœ๋…๋“ค
์ œ3์žฅ ๋ชจ๋Ž€๊ณผ ๋””์ง€ํ„ธ ์ „์†ก๊ธฐ์ˆ 
์ œ4์žฅ ์ ‘์†๊ทœ๊ฒฉ๊ณผ ํ๋ฆ„์ œ์–ด ์—๋Ÿฌ์ œ์–ด ๋ฐฉ๋ฒ•
์ œ5์žฅ ํ”„๋กœํ† ์ฝœ์ด๋ž€ ๋ฌด์—‡์ธ๊ฐ€?
์ œ6์žฅ ๋‹ค์ค‘ํ™”์™€ ๋ฐ์ดํƒ€ ์••์ถ•?
์ œ7์žฅ ๊ตํ™˜๊ธฐ์ˆ 
์ œ8์žฅ ์ œ1์žฅ~์ œ7์žฅ ์š”์•ฝ
์ œ9์žฅ ๋ง๊ตฌ์กฐ์™€ ๊ฐœ๋ฐฉํ˜• ํ†ต์‹ ๋ง(OSI)
์ œ10์žฅ ํŒจํ‚ทํ†ต์‹ ์˜ ์ดํ•ด
์ œ11์žฅ B-ISDN(๊ด‘๋Œ€์—ญ ์ •๋ณด์ข…ํ•ฉํ†ต์‹ ๋ง)์˜ ์ดํ•ด
์ œ12์žฅ ๊ทผ๊ฑฐ๋ฆฌ ํ†ต์‹ ๋ง
์ œ13์žฅ ๋ฌด์„  ๋ฐ์ดํƒ€ ํ†ต์‹ 
์ œ14์žฅ ์œ„์„ฑ ํ†ต์‹ 
์ œ15์žฅ ๊ด‘ํ†ต์‹ 
์ œ16์žฅ ์ œ9~15์žฅ ์š”์•ฝ
์ œ17์žฅ ์ •๋ณดํ†ต์‹  ๋ณด์•ˆ(Security)
์ œ18์žฅ ์ธํ„ฐ๋„ท(Internet)
์ œ19์žฅ EDI(Electronic Data Interchange)
์ œ20์žฅ ๋ฒ”์ง€๊ตฌ์ธก์œ„ ์‹œ์Šคํ…œ(GPS: Global Positioning System)๊ณผ ๊ทธ ํ™œ์šฉ
์ œ21์žฅ ๋„คํŠธ์›Œํฌ ๊ด€๋ฆฌ
์ œ22์žฅ ๋„คํŠธ์›Œํฌ ์„ ์ •
์ œ23์žฅ ์ •๋ณดํ†ต์‹  ๊ธฐ์ˆ ์˜ ๋ฏธ๋ž˜

Modems and Digital Transmission Technology

The most important task in information communication is delivering the electrical signal that carries the information to a remote destination intact.
A signal must be sent and received accurately over circuits ranging from a few hundred meters to tens of thousands of kilometers; only then is accurate information transfer achieved, and transmission technology is what accomplishes this.
In fact, much of information communication is bound up with transmission technology, but this chapter introduces only a few fundamentals. From Chapter 3 you should come to understand what analog and digital transmission are and how digital transmission is advantageous over analog transmission, what a modem does and how it transmits information composed of 1s and 0s, and which transmission methods are used for the accurate sending and receiving of characters.


This concerns the technology of accurate information transfer. Information generated by a user is converted into a signal form suitable for communication, then carried over the channel as an electrical signal by various transmission technologies and delivered to the other party.
In delivering information, what matters above all is that the transmitted information is received correctly and accurately by the other party.
If the information A sent was 'the leading player of the information age' but what B received was 'the supporting player of the information age', or something entirely unintelligible, then the communication between A and B is meaningless.
The 'first principle' of information communication is therefore the accurate delivery of information, and the various forms of transmission technology exist precisely to carry out this first principle.
Here we first explain the signal transmission forms required above all for carrying information well over a transmission medium, that is, the signal conversion of information.
In particular, we focus on modem technology for carrying digital information over analog signals, the most widely used transmission form today.
We also touch on the various forms of information transmission: serial and parallel transmission, and synchronous and asynchronous transmission.
The next lecture examines the interface technology for attaching various data equipment to transmission media, centered on RS-232C, currently the most widely used standard.
It also covers how the information path from the sending side to the receiving side is organized, and explains the error control and flow control methods that most explicitly carry out the first principle of information communication.

As mentioned previously, the information we wish to express takes two forms: analog information and digital information.
When these two forms of information are carried over a transmission medium, they travel as electromagnetic waves, that is, as signals; transmission of the signal in digital form is called 'digital signal transmission', and transmission in analog form is called 'analog signal transmission'.
In other words, like the form of the information itself, the form of transmission also divides into digital and analog.
'Information transmission' can thus be said to deal with the relationship between the information to be expressed and the transmitted signal.
Four combinations of information transmission are possible: analog information carried as an analog or a digital signal, and digital information carried as an analog or a digital signal (see Table 3.1).

The form of information communication we use most today is connecting a computer to the public switched telephone network, that is, to the telephone line, and exchanging information over it. The public telephone network, however, was originally built to carry voice, and its characteristics are suited to carrying analog signals.
Therefore, to communicate over the telephone network, one's information must be converted into a signal form the network can accept, namely an analog signal. The device that performs exactly this role, converting digital information into analog signal form, is the modem.
The word modem is a contraction of Modulator and Demodulator. As shown in Figure 3-1, it is a kind of signal converter with a modulation function that converts digital information into an analog signal and, conversely, a demodulation function that recovers the original digital information from the analog signal (see Figure 3-1).
There are three basic modulation methods for converting digital information into an analog signal: amplitude-shift keying, frequency-shift keying, and phase-shift keying.
Of course, combinations of these methods are also used. Amplitude-shift keying represents '0' and '1' as carrier waves of two different amplitudes; because it is inefficient, it is usually combined with phase-shift keying rather than used alone.

Frequency-shift keying represents the binary values '0' and '1' as sine waves of two different frequencies near the carrier frequency.
For example, the Bell 202 series, which specifies an 1800bps half-duplex modulation scheme, assigns a frequency of 1200Hz to the binary value '1' (mark) and 2200Hz to '0' (space).
Phase-shift keying carries the information in the phase of the sine-wave carrier: the phase of a sine wave of fixed frequency and amplitude is divided into two, four, or more parts, and each distinct phase is assigned a '1' or '0', or two or three bits at once, before being sent to the other side.
Beyond these three modulation methods, they can also be mixed. For example, combining amplitude-shift and phase-shift keying yields transmission speeds over voice-grade lines of up to today's 56Kbps. Figure 3-2 shows the carrier waveform and the shapes of amplitude, frequency, and phase modulation.

Carrier waveform and the three modulation methods
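As a rough illustration of the three schemes (a sketch only, not any particular modem standard: the carrier, bit rate, and amplitude choices are arbitrary, and the FSK line borrows the Bell 202 tone pair mentioned above):

    # Illustrative ASK/FSK/PSK waveforms for a short bit sequence. Note that
    # this simple FSK form switches frequency without preserving phase
    # continuity; real modems are more careful.
    import numpy as np

    bits = [1, 0, 1, 1, 0]
    fc, bit_rate, fs = 1200.0, 300.0, 48000.0   # carrier Hz, bit/s, sample rate
    t = np.arange(0, len(bits) / bit_rate, 1 / fs)
    symbol = np.array([bits[int(ti * bit_rate)] for ti in t])

    ask = (0.5 + 0.5 * symbol) * np.sin(2 * np.pi * fc * t)       # two amplitudes
    fsk = np.sin(2 * np.pi * np.where(symbol == 1, 1200.0, 2200.0) * t)  # two tones
    psk = np.sin(2 * np.pi * fc * t + np.pi * symbol)             # 0 / 180 degrees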

Modems are classified in several ways depending on the point of view. By usage form, they divide into internal modems assembled inside a system and external modems that constitute standalone devices.
Most modems used for dial-up PC communication are internal. General-purpose modems are also classified by data rate into low-, medium-, and high-speed modems.
Beyond these classifications, modems are also grouped by the bandwidth of the channel they use, the usable distance, and the number of usable ports.
As explained earlier, the modem is an essential communication device for information communication over the public telephone network, and is generally built to one of two standards.
One is the Bell standard, centered on the United States; the other is the CCITT (ITU-T) standard, centered on Europe. The CCITT standard in particular carries international authority, and CCITT's V-series recommendations govern data communication over analog transmission paths.
When you see these recommendation codes marked on your own modem or in magazines, remember that they are modem standards classified by data rate and transmission method.
In any case, even though advances in communication technology have digitalized much communication equipment, the modem remains one of the most widely used communication devices. Although the industry regards modems as a sunset business, modem sales have continued steadily.
Looking at today's modem market, modems are steadily becoming smaller, and besides pursuing lower cost, lower power consumption, and higher speed, they now provide various communication functions beyond basic modulation and demodulation, such as error control and data compression.
For instance, CCITT's V.34 recommends a specification for 28,800bps modems, and with the recent trend toward higher communication speeds, many modem vendors such as AT&T and Racal-Milgo produce high-speed modems conforming to this recommendation.
Likewise, the MNP modems widely deployed domestically in the financial industry are multifunction modems that add error control and data compression to the basic modulation function. The recently announced 56Kbps modems are not yet covered by an international standard, so the products of the two camps (Rockwell and US Robotics) are not mutually compatible.

Just as digital information is converted into analog signals, analog information such as voice can be converted into digital signals and transmitted. Transmission in this digital signal form is called digital transmission.
With the digitalization of the public telephone network and private branch exchanges, converting voice into digital signals rather than leaving it in its analog form has made a broader range of functions possible.
As shown in the figure, the device that converts analog voice information into a digital signal and then restores the original voice information from the digital signal is called a codec.
Codec is a compound of coder and decoder.
Of course, information converted into digital signals can be processed in that form, or, as in Figure 3-3, passed through a modem, converted back into an analog signal, and transmitted over an analog network. This new analog signal, unlike the original voice, carries the meaning of the binary digital signal intact, so its shape differs from the original analog voice signal.

The most widely used method of converting voice information into a digital signal is Pulse Code Modulation (PCM).
PCM first samples the voice at regular time intervals to obtain a pulse-amplitude-modulated signal, then passes it through a quantizer to normalize each amplitude value, and finally assigns a binary code to each quantized value.
Sampling divides the voice into regular time intervals, that is, samples, and assigns an amplitude value to each sample; this is also called Pulse Amplitude Modulation (PAM).
Quantization normalizes the amplitude values obtained by sampling, for example representing a value of 1.08 as 1.1.
The error that arises in this process is called quantization error.
All amplitude values obtained by quantization are divided into equal steps, and a binary value is assigned to each step.
When human voice is transmitted by PCM, the transmission rate is 64Kbps.
This rate results from sampling the voice 8000 times per second and representing each sample with 8 bits. Figure 3-4 shows the PCM (pulse code modulation) process.
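That arithmetic, and the sample-quantize-encode pipeline behind it, fits in a few lines of Python; the 8000Hz and 8-bit figures are the ones quoted above, while the test tone and helper function are illustrative choices of ours:

    # PCM in miniature: sample, quantize to 8 bits, check the bit rate.
    import math

    SAMPLE_RATE = 8000        # samples per second, as quoted in the text
    BITS_PER_SAMPLE = 8

    def quantize(x, bits=BITS_PER_SAMPLE):
        # Map an amplitude in [-1, 1] onto one of 2**bits uniform levels.
        levels = 2 ** bits
        return min(int((x + 1.0) / 2.0 * levels), levels - 1)

    # Eight samples (one millisecond) of a 1kHz test tone, then their codes.
    samples = [math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE) for n in range(8)]
    print([quantize(s) for s in samples])
    print(SAMPLE_RATE * BITS_PER_SAMPLE)   # 64000 bit/s: the 64Kbps figure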
To carry digital information over a channel that accepts digital signal forms, the original digital information can be sent as-is, but it is generally converted into another digital signal form to improve its transmission characteristics.
Unlike the modem or the codec, which convert information because the transmission medium accepts only analog or only digital signals, here the signal form is converted to improve the transmission characteristics over the channel.
Transmitting digital information as another form of digital signal requires simpler circuitry and lower cost than a modem that converts digital information into an analog signal.
For example, the simplest implementation represents the binary values '1' and '0' as a negative (-) voltage and a positive (+) voltage respectively. This code is NRZ-L (Nonreturn-to-Zero-Level). The name means that within a bit interval the signal never returns to the zero level. This code has the advantage of being the simplest, but because adjacent bits are hard to tell apart, its error probability is high and synchronization is easily lost.
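In code, NRZ-L is nothing more than a fixed two-level mapping (the level values here are arbitrary), which also makes its weakness visible: a run of identical bits produces a transition-free stretch of line with nothing for the receiver to synchronize on:

    # NRZ-L: '1' and '0' map to two fixed levels; no return to zero mid-bit.
    def nrz_l(bits, high=+1, low=-1):
        return [high if b else low for b in bits]

    print(nrz_l([1, 0, 1, 1, 1, 1, 0]))   # note the flat run over the 1s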

The pulse code modulation process

๋”ฐ๋ผ์„œ ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ๊ทธ๋ฆผ 3-5์—์„œ์™€ ๊ฐ™์€ ๋งจ์ฒด์Šคํ„ฐ์™€ ์ฐจ๋™๋งจ์ฒด์Šคํ„ฐ ๋ถ€ํ˜ธ๋ฐฉ์‹์„ ๋งŽ์ด ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋‹ค. ์ด๋“ค ๋ถ€ํ˜ธ๋ฐฉ์‹์€ ๊ทธ๋ฆผ์—์„œ์™€ ๊ฐ™์ด ํ•œ ๋น„ํŠธ๋‚ด์— ์‹ ํ˜ธ๋ณ€ํ™”๋ฅผ ๋ถ€์—ฌํ•จ์œผ๋กœ์จ ์—๋Ÿฌ๊ฒ€์ถœ๊ณผ ๋™๊ธฐํ™”๋ฅผ ์‰ฝ๊ฒŒ ํ•˜๊ณ  ์žˆ๋‹ค.

Manchester code and differential Manchester code
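A small sketch of both codes follows. Level conventions differ between references, so the mapping below (Manchester: 1 = low-to-high; differential Manchester: 0 = extra transition at the cell boundary) is one common choice, not the only one:

    # Both codes guarantee a mid-bit transition, which carries the clock.
    def manchester(bits):
        out = []
        for b in bits:
            out += [-1, +1] if b else [+1, -1]   # 1: low->high, 0: high->low
        return out

    def diff_manchester(bits, start=+1):
        out, level = [], start
        for b in bits:
            if b == 0:
                level = -level        # 0: transition at the start of the cell
            out += [level, -level]    # mandatory mid-cell transition
            level = -level            # next cell starts where this one ended
        return out

    print(manchester([1, 0, 1]))        # [-1, 1, 1, -1, -1, 1]
    print(diff_manchester([1, 0, 1]))   # [1, -1, 1, -1, -1, 1]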

The device responsible for the digital transmission of digital information is generally called a Digital Service Unit (DSU) (see Figure 3-6).
Analog information can easily be converted into an analog signal of the same bandwidth. The most representative example is voice: the human voice, a sound wave in the 300 to 3400Hz range, can be expressed as an electrical analog signal occupying the same frequency band, and that signal is transmitted directly over the telephone line.

Functions of the DSU

Also, to improve the transmission characteristics of the analog signal, analog information too can be converted into an analog signal by varying one of its elements, such as amplitude or frequency, just as in the analog transmission of digital information.
Amplitude modulation varies the amplitude of the carrier according to the analog information; frequency modulation varies the frequency of the carrier according to the information.
These methods are used in broadcasting and are known respectively as amplitude modulation (AM) and frequency modulation (FM) broadcasting.
We have now looked at digital transmission and analog transmission, carrying each form of information as digital or analog signals.
As mentioned earlier, the reason information is transmitted after signal conversion is above all to deliver the intended information to the other party more accurately and reliably.

In analog transmission, the transmitted signal form is analog, and only the analog signal itself is delivered, regardless of the content of the information contained within it.
Therefore, for long-haul transmission, the signal must be re-amplified along the way to compensate for attenuation.
At each amplification, however, not only the information-bearing signal but also any noise mixed into it is amplified, so transmitting digital information this way carries a particularly high probability of error.
Digital transmission, on the other hand, delivers the content carried by the transmitted signal: even if noise is picked up en route, repeaters restore the original signal content before sending it on, so the attenuation imposed by the transmission medium can be overcome.

Table 3-2 summarizes the further advantages of digital transmission. As the table shows, digital transmission exhibits characteristics superior to analog transmission across all three communication goals introduced in Chapter 1.

Most equipment that handles computer data works entirely with binary digital signals, and since digital transmission has more advantages than analog transmission, digital transmission is the desirable choice for computer-based information communication; in practice, however, analog transmission is often adopted in order to exploit the vast telephone network already installed for voice.

Recently, however, as demand for data communication has grown, network services for digital transmission are expected to expand steadily. The integrated services digital network toward which the information society is heading is a digital network that carries digital signals directly, and as it is built out, the use of digital transmission is expected to grow further.

The ultra-high-speed integrated information network now being promoted can ultimately be seen as an effort to digitalize the transmission methods of all domestic communication.


There are two methods of transmitting digital information: serial transmission and parallel transmission. In serial transmission, the bits making up one character are transmitted in sequence over a single transmission line; in parallel transmission, the bits are transmitted simultaneously over multiple lines.
For instance, if one character consists of 8 bits, parallel transmission requires at least 8 lines. Parallel transmission is generally used for data transfer between a computer and its peripherals; as distance grows it is rarely used, because of the cost burden of the lines.

Information communication therefore relies mostly on serial transmission. Figure 3-7 compares serial and parallel transmission.

์ง๋ ฌ ์ „์†ก ๋ฐฉ์‹๊ณผ ๋ณ‘๋ ฌ ์ „์†ก ๋ฐฉ์‹


If, in delivering characters between sender and receiver, the start and end of each character are not precisely defined, the receiver cannot correctly recognize the characters the sender transmitted.
An exact method of transmitting characters must therefore be defined in advance between sender and receiver, and the asynchronous and synchronous methods are the two widely used approaches.
Asynchronous transmission inserts a start bit and a stop bit before and after each very small block of bits, usually a single character, to synchronize the block; it is also called start-stop transmission.
Around each character unit of 5 to 8 bits are placed a start bit and a stop bit announcing the character's beginning and end, and transmission proceeds character by character.

For example, teletype-style terminals mostly transmit data asynchronously: each press of a key transmits one character.
Between one character and the next there is thus an idle period, during which bits such as the stop bit are transmitted continuously. If the receiving side runs about 5% faster or slower than the sending side, the eighth bit will be sampled 45% (5% x 9 bits) of a bit early or late, but the information can still be recognized correctly.
If, however, the bit block making up one character is larger than 8 bits, or the timing mismatch between sender and receiver exceeds 5%, a timing error occurs.
That is, the last sampled bit is misread, and the bit count also comes out wrong.
If bit 7 were 0 and bit 8 were 1, bit 8 would be misrecognized as a start bit; exactly this kind of error is called a framing error.
In general, asynchronous transmission is simple and cheap, but it requires 2-3 bits of overhead per character, including the start and stop bits, so its transmission efficiency is quite low, and it is normally used at low transmission speeds.
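A sketch of that start-stop framing, using one start bit and one stop bit (the common 8-N-1 arrangement, our choice here), makes both the overhead and the drift arithmetic concrete:

    # Start-stop framing: start bit (0) + 8 data bits (LSB first) + stop bit (1).
    def frame_async(byte):
        data = [(byte >> i) & 1 for i in range(8)]
        return [0] + data + [1]

    frame = frame_async(ord('A'))
    print(frame)              # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
    print(8 / len(frame))     # 0.8: only 80% of the line bits carry data

    # Clock-drift check from the text: a 5% rate mismatch accumulates to
    # 0.05 * 9 = 0.45 of a bit by the last data bit, still within half a bit.
    print(0.05 * 9)           # 0.45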

Synchronous transmission, on the other hand, transmits not single characters but blocks of data containing many characters.
In this method, timing is supplied by equipment such as the modems or multiplexers installed at both ends; because sync characters or flags are used for the sender and receiver to recognize the data blocks, the terminal must have a buffer, and the method is normally used at transmission speeds of 2000bps and above.
For reasonably large data blocks, synchronous transmission generally also achieves better transmission efficiency than asynchronous transmission. Most communication protocols used in information communication therefore employ synchronous transmission.

As shown in Figure 3-8, synchronous transmission divides in turn into character-oriented and bit-oriented forms, with bit-oriented synchronous transmission mostly adopted in recent practice.
HDLC and SDLC are typical examples of bit-oriented synchronous transmission protocols.
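As a flavor of the bit-oriented style, here is a simplified HDLC-like framing sketch: flags of 01111110 delimit the frame, and the sender stuffs a 0 after every run of five 1s so the flag pattern can never appear inside the payload (address, control, and checksum fields are omitted):

    # HDLC-style flag framing with bit stuffing (simplified).
    FLAG = [0, 1, 1, 1, 1, 1, 1, 0]

    def stuff(bits):
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:          # five consecutive 1s: insert a 0
                out.append(0)
                run = 0
        return out

    payload = [1, 1, 1, 1, 1, 1, 0, 1]   # six 1s in a row
    print(FLAG + stuff(payload) + FLAG)  # the stuffed 0 breaks up the run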

Synchronous transmission and asynchronous transmission


IEEE Moves Toward Drafting 40Gbps and 100Gbps Ethernet Specifications

Published: EEKOREA, September 10, 2007, by Rick Merritt

Having reached consensus on drafting a roadmap for both 40Gbps and 100Gbps Ethernet networking, the IEEE will launch a formal task force next March to write specifications for both transmission speeds.

At its meeting last July, the IEEE Higher Speed Study Group resolved to support drafting 40Gbps and 100Gbps Ethernet specifications. The 100Gbps specification will include versions for 40km and 10km over single-mode fiber, 100m over multimode fiber, and 10m over copper cable.

Meanwhile, the 40Gbps proposal, backed mainly by data center equipment makers, was the more contentious part of the decision. Telecom and networking engineers have recently been divided over 40Gbps Ethernet.

์ถฉ๋Œํ•˜๋Š” ์ œ์•ˆ๋“ค

Telecom engineers argued that the speed must be interoperable with today's 40Gbps optical transport networks, which have been deployed for at least the past six years. Steve Trowbridge, a standards expert at Alcatel-Lucent, wrote a proposal to that end and won support from at least seven companies.

The proposal implies that a 40Gbit standard cannot use the 64/66bit encoding scheme of today's 10GbE, because it does not leave enough payload to carry data directly onto optical networks. Alcatel's proposal set out 512/513bit encoding as one possible approach.
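The arithmetic behind the dispute is simple: 64/66bit coding spends two framing bits per 64 payload bits, while 512/513bit spends only one per 512, freeing line rate for transport-network overhead. A quick check:

    # Line-coding efficiency of the two proposals.
    for payload, coded in ((64, 66), (512, 513)):
        print(f'{payload}b/{coded}b: {payload / coded:.4%} of line bits are payload')
    # 64b/66b: 96.9697%   512b/513b: 99.8051%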

"At 10G we ended up with two standards and four proprietary approaches. Most of these were hostile links that could not be networked," Trowbridge said. "Each customer chose different solutions, and several of them did not interoperate well. Far too many systems were built as a result," he explained. "People do not want that experience repeated."

Networking engineers, by contrast, did not want a 40Gbps speed grade at all. Rather, they worried that it would fragment the sizable market for high-speed routers and switches that have no need to connect directly to optical transport networks.

"You end up with two solutions in the same space. That essentially splits the market, and in a total market that will not be all that large you have to build two transceivers and two line cards," said one networking engineer who asked not to be named.

ํ•ฉ์˜ ๋„์ถœ

In the end, all the stakeholders agreed to support three variants of the 40Gbps standard: 100m over multimode fiber, 10m over copper cable, and a 1m version extended across backplanes. IEEE materials state that the 40Gbps specification will provide "appropriate support for optical transport networks."

"That wording leaves room for several different ways of meeting the goal," said Robert Hays, strategic marketing manager for Intel's Ethernet products and one of the backers of the 40Gbps proposal. "One concern people have in pushing this forward is that the 40Gbps proposal could be too restrictive," he added.

During discussions held alongside the July meeting, the engineers could see for themselves that no group of companies would support anything beyond what is needed to ensure 40Gbps Ethernet's compatibility with optical transport networks, Hays said. "I think everyone is satisfied, because we reached a consensus that meets the needs of the majority," he added.

Notably, the engineers agreed, if only informally, to work with groups such as the Ethernet Alliance to educate the market about the different roles of 40GbE and 100GbE products.

40GbE speeds will be used mainly within and between servers and switches, and their necessity is unlikely to be recognized until 2011 or later. 100GbE products will mainly link switches in data centers and backbone networks, and will be needed sooner, said Dan Dove, now a consultant after a long career in HP's networking unit. He participated actively in the IEEE effort.

According to Dove, networking engineers will write the final specification so that chip makers can design silicon supporting both speeds. "We do not want to face doubling the silicon and doubling the number of systems," Dove said. He added that having a single body write both specifications will help drive common technology across the two transmission speeds.

The hard problem still unresolved is the encoding scheme. Ethernet uses 64/66bit encoding, while telecom vendors are proposing approaches such as 512/513bit. Engineers currently expect most of the specification, particularly for meeting the short-reach requirements, to use multiple channels of 10GbE; but the implementation question of parallel versus serial remains open.

If the standards effort stays on track, final 40Gbit and 100Gbit specifications could be completed before June 2010. A draft standard should be settled by early 2009, and a few commercial products could be announced by the end of 2009.

However, because the IEEE must write both the 40GbE and 100GbE specifications, the schedule could slip, Dove said.

Intel and Broadcom lobbied in support of 40Gbps Ethernet networking.

Sep 20, 2007

Intel Forms USB 3.0 Promoter Group

์ธํ…”์„ ๋น„๋กฏํ•œ ์—ฌ๋Ÿฌ ์„ ๋‘ ๊ธฐ์—…๋“ค์ด ํ˜„์žฌ ์ „์†ก ์†๋„๋ณด๋‹ค 10๋ฐฐ ๋น ๋ฅธ 5Gbps์˜ ์ดˆ๊ณ ์† USB ๊ฐœ๋ฐœ์„ ์œ„ํ•ด USB 3.0 ํ”„๋กœ๋ชจํ„ฐ ๊ทธ๋ฃน(Promoter Group)์„ ๊ฒฐ์„ฑํ–ˆ๋‹ค๊ณ  ๋ฐœํ‘œํ–ˆ๋‹ค.

์ธํ…”์ด HP, NEC ์ฝ”ํฌ๋ ˆ์ด์…˜, NXP ์„ธ๋ฏธ์ปจ๋•ํ„ฐ, ํ…์‚ฌ์Šค ์ธ์ŠคํŠธ๋ฃจ๋จผ์ธ ์™€์˜ ํ˜‘๋ ฅ์„ ํ†ตํ•ด ๊ฐœ๋ฐœํ•œ ์ด ๊ธฐ์ˆ ์€ ๋””์ง€ํ„ธ ๋ฏธ๋””์–ด์˜ ์œ ๋น„์ฟผํ„ฐ์Šคํ™”๊ฐ€ ์ด๋ฃจ์–ด์ง€๊ณ  ํŒŒ์ผ ํฌ๊ธฐ๊ฐ€ ์ตœ๋Œ€ 25GB ์ด์ƒ์œผ๋กœ ์ฆ๊ฐ€๋˜๋ฉด์„œ ๊ทธ ํ•„์š”์„ฑ์ด ์ปค์ง„ PC์™€ ์ปจ์Šˆ๋จธ, ๋ชจ๋ฐ”์ผ ๋ถ€๋ฌธ์˜ ๊ณ ์† ์‹ฑํฌ์•ค๊ณ (sync-and-go) ์ „์†ก ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์— ์‚ฌ์šฉ๋œ๋‹ค.

Like previous USB technologies, USB (Universal Serial Bus) 3.0 will not only feature ease of use and plug-and-play capability but will also establish a backward-compatible standard.

Targeting a more than tenfold performance increase, the technology is based on the same architecture as wired USB. In addition, the USB 3.0 specification will be optimized for lower power consumption and greater protocol efficiency. USB 3.0 ports and cabling will be designed to enable backward compatibility as well as future optical capabilities.

Jeff Ravencraft, technology strategist at Intel and president of the USB Implementers Forum (USB-IF), said, "USB 3.0 is the next step for the PC's most widely used wired connection. The digital era requires a high-speed, reliable way to move the enormous amounts of digital content now common in everyday life.

USB 3.0 will meet that requirement while preserving the ease of use that made consumers favor USB technology in the first place and keeps them expecting more from it."

์ธํ…”์€ USB ์ž„ํ”Œ๋ฆฌ๋ฉ˜ํ„ฐ ํฌ๋Ÿผ์ด USB 3.0 ๊ทœ๊ฒฉ์— ๋Œ€ํ•œ ์—…๊ณ„ ๋Œ€ํ‘œ ๋‹จ์ฒด์˜ ์—ญํ• ์„ ํ•  ๊ฒƒ์ด๋ผ๋Š” ์ดํ•ด ํ•˜์— USB 3.0 ํ”„๋กœ๋ชจํ„ฐ ๊ทธ๋ฃน์„ ์ฐฝ์„คํ–ˆ๋‹ค. USB 3.0 ๊ทœ๊ฒฉ์˜ ์™„์„ฑ๋ณธ์€ 2008๋…„ ์ƒ๋ฐ˜๊ธฐ์— ๋ฐœํ‘œ๋  ์˜ˆ์ •์ด๋‹ค. ๋˜ํ•œ USB 3.0์€ ์ดˆ๋ฐ˜์— ๋…๋ฆฝ ์‹ค๋ฆฌ์ฝ˜์˜ ํ˜•ํƒœ๋ฅผ ๋ ๊ฒŒ ๋  ๊ฒƒ์ด๋‹ค.

The USB 3.0 Promoter Group is committed to preserving the class-driver infrastructure and investment of existing USB devices, the same look-and-feel, and the same ease of use, while continuing to strengthen the capabilities of this great technology.

Industry Leaders Develop Superspeed USB Interconnect

Popular USB Computer Connection Technology Expands Performance with Proposed USB 3.0 Specification

INTEL DEVELOPER FORUM, San Francisco, Sept. 18, 2007 - Intel Corporation and other industry leaders have formed the USB 3.0 Promoter Group to create a superspeed personal USB interconnect that can deliver over 10 times the speed of today's connection. The technology, also developed by HP, Microsoft Corporation, NEC Corporation, NXP Semiconductors and Texas Instruments Incorporated, will target fast sync-and-go transfer applications in the PC, consumer and mobile segments that are necessary as digital media become ubiquitous and file sizes increase up to and beyond 25 Gigabytes.

USB (Universal Serial Bus) 3.0 will create a backward-compatible standard with the same ease-of-use and plug and play capabilities of previous USB technologies. Targeting over 10x performance increase, the technology will draw from the same architecture of wired USB. In addition, the USB 3.0 specification will be optimized for low power and improved protocol efficiency. USB 3.0 ports and cabling will be designed to enable backward compatibility as well as future-proofing for optical capabilities.

"USB 3.0 is the next logical step for the PC's most popular wired connectivity," said Jeff Ravencraft, technology strategist with Intel and president of the USB Implementers Forum (USB-IF). "The digital era requires high-speed performance and reliable connectivity to move the enormous amounts of digital content now present in everyday life. USB 3.0 will meet this challenge while maintaining the ease-of-use experience that users have come to love and expect from any USB technology."


Intel formed the USB 3.0 Promoter Group with the understanding that the USB-IF would act as the trade association for the USB 3.0 specification. A completed USB 3.0 specification is expected by the first half of 2008. USB 3.0 implementations will initially be in the form of discrete silicon.

The USB 3.0 Promoter Group is committed to preserving the existing USB device class driver infrastructure and investment, look-and-feel and ease-of-use of USB while continuing to expand this great technology's capabilities.

About the USB-IF
The non-profit USB Implementers Forum, Inc. was formed to provide a support organization and forum for the advancement and adoption of USB technology. The USB-IF facilitates the development of high-quality compatible USB devices, through its logo and compliance program and promotes the benefits of USB and the quality of products that have passed compliance testing. Further information, including postings of the most recent product and technology announcements, is available by visiting the USB-IF Web site at www.usb.org.

Participating Company Press Quotes:
"HP's commitment to providing customers with a reliable method for connecting peripherals is evident through our support of both USB 2.0 and Wireless USB technologies," said Phil Schultz, vice president, Consumer Inkjet Solutions, HP. "Now, with USB 3.0, we're creating an even better experience for customers when connecting their printers, digital cameras or other peripheral devices to their PCs."

"Intel worked jointly with industry leaders in the development and adoption of two generations of USB, which has become the number one peripheral interface in computing and hand-held consumer electronic devices," said Patrick Gelsinger, senior vice president and general manager, Digital Enterprise Group, Intel Corporation. "As the market evolves to support customer demands for storing and moving larger amounts of digital content, we look forward to developing the third generation of USB technology that leverages the current USB interface and optimize it to meet these demands."
"NEC has been a supporter of USB technologies since the first installment of wired USB," said Katsuhiko Itagaki, general manager, SoC Systems Division, NEC Electronics Corporation. "Now it's time to evolve an already successful interface to meet market demands for moving large amounts of content at faster speeds to minimize users wait time."

"NXP is pleased to join other top-tier companies in advancing the number one interconnect technology in the world to meet the needs of next-generation peripherals," said Pierre-Yves Couteau, director of Strategy & Business Development, Business Line Connected Entertainment, NXP Semiconductors. "As a leading provider of USB semiconductor solutions, NXP is committed to drive the standardization and applications of Superspeed USB in the industry."

"With the proliferation of Hi-Speed USB in a wide number of market segments, including personal computing, consumer electronics, and mobility, we anticipate that USB 3.0 will rapidly become the de facto standard as the replacement of USB 2.0 ports in applications where higher bandwidth is valued," said Greg Hantak, vice president Worldwide ASIC at Texas Instruments Incorporated. "TI is excited about the new applications and improved user experience that will be enabled by the performance of USB 3.0."

For more news coverage out of IDF, visit the complete press kit at www.intel.com/pressroom/idf.

About Intel
Intel, the world leader in silicon innovation, develops technologies, products and initiatives to continually advance how people work and live. Additional information about Intel is available at www.intel.com/pressroom and blogs.intel.com.