# Computer Network Experiment: TCP vs QUIC (Linux Guide)
This guide adapts the Windows-based experiment manual for a Linux environment.
## 1. Prerequisites
Ensure you have the following installed:
- `gcc` (Compiler)
- `quiche` library (Headers and Shared Object installed)
- `openssl` (For certificates)
- `tcpdump` or `wireshark` (For packet capture)
- `iproute2` (For `tc` traffic control)
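The QUIC server in Task 2 needs a TLS certificate. A self-signed one can be generated with `openssl`; the file names `cert.crt`/`cert.key` below are illustrative assumptions — use whatever paths `quic_server` actually expects:

```bash
# Generate a self-signed certificate and private key for local testing.
# (File names are placeholders; match them to quic_server's configuration.)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout cert.key -out cert.crt -days 365 \
  -subj "/CN=localhost"
```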
## 2. Compilation
Compile all programs using the provided Makefile:
```bash
make
```
This will generate:
- `tcp_server`, `tcp_client` (Task 1)
- `quic_server`, `quic_client` (Task 2)
- `tcp_perf_server`, `tcp_perf_client` (Task 3 Performance)
- `quic_perf_server`, `quic_perf_client` (Task 3 Performance)
*Note: If `quiche` is not in the standard system path, edit the `Makefile` to point to the include/lib directories.*
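If the Makefile honours the conventional `CFLAGS`/`LDFLAGS` variables (an assumption — check the Makefile), the paths can also be overridden on the command line instead of editing the file. The quiche checkout location below is hypothetical:

```bash
# Hypothetical quiche checkout location; adjust to your system.
make CFLAGS="-I$HOME/quiche/quiche/include" \
     LDFLAGS="-L$HOME/quiche/target/release"
# If libquiche.so lives outside the loader's search path:
export LD_LIBRARY_PATH="$HOME/quiche/target/release:$LD_LIBRARY_PATH"
```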
## 3. Task 1: Basic TCP Client-Server
1. **Start the Server:**
```bash
./tcp_server
```
2. **Run the Client (in a new terminal):**
```bash
./tcp_client
```
**Expected Output:** The client sends "Hello...", the server receives it and replies.
## 4. Task 2: Basic QUIC Client-Server
1. **Start the Server:**
```bash
./quic_server
```
2. **Run the Client (in a new terminal):**
```bash
./quic_client
```
**Expected Output:** QUIC handshake completes, client sends data on a stream, server echoes it back.
## 5. Task 3: Performance Analysis
### 3.1 Connection Establishment Time
1. Start capture on loopback:
```bash
sudo tcpdump -i lo -w handshake.pcap
```
*(Or use Wireshark on the `lo` interface)*
2. Run the TCP or QUIC client/server pairs again.
3. Open `handshake.pcap` in Wireshark to analyze the time difference between the first packet (SYN for TCP, Initial for QUIC) and the completion of the handshake.
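For a quick estimate without the GUI, the two timestamps can be subtracted directly. A sketch using tcpdump's default `HH:MM:SS.micros` timestamp format — the values below are placeholders, not measured data:

```bash
# Handshake duration from two tcpdump timestamps (sample values).
first='10:00:00.000100'   # first packet (TCP SYN / QUIC Initial)
last='10:00:00.000350'    # packet completing the handshake
awk -v a="$first" -v b="$last" 'BEGIN {
  split(a, x, "[:.]"); split(b, y, "[:.]")
  ta = (x[1]*3600 + x[2]*60 + x[3]) * 1e6 + x[4]
  tb = (y[1]*3600 + y[2]*60 + y[3]) * 1e6 + y[4]
  printf "%d us\n", tb - ta
}'
# → 250 us
```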
### 3.2 Throughput Test (100MB Transfer)
**Baseline (Normal Network):**
1. Run TCP Perf Server: `./tcp_perf_server`
2. Run TCP Perf Client: `./tcp_perf_client`
3. Record the MB/s output.
4. Repeat for QUIC (`./quic_perf_server`, `./quic_perf_client`).
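Whichever pair you run, throughput is simply bytes moved over elapsed wall-clock time. A quick sanity check of the arithmetic, with illustrative numbers rather than measurements:

```bash
# Sketch: throughput for a 100 MB transfer that took 8 seconds.
# In a real run, start/end would be timestamps taken around the client.
start=0.0; end=8.0
bytes=$((100 * 1024 * 1024))
awk -v s="$start" -v e="$end" -v b="$bytes" \
    'BEGIN { printf "%.1f MB/s\n", b / 1048576 / (e - s) }'
# → 12.5 MB/s
```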
**Simulating Network Conditions (Packet Loss / Delay):**
We use Linux `tc` (Traffic Control) with `netem` in place of the Windows tool `clumsy`.
**Scenario A: 5% Packet Loss**
1. Apply 5% loss to the loopback interface (note: a qdisc on `lo` affects both directions, so each request/response round trip crosses the impaired link twice):
```bash
sudo tc qdisc add dev lo root netem loss 5%
```
2. Run the perf tests again.
3. **Important:** Remove the rule after testing!
```bash
sudo tc qdisc del dev lo root
```
**Scenario B: 100ms Delay**
1. Apply 100ms delay:
```bash
sudo tc qdisc add dev lo root netem delay 100ms
```
2. Run the perf tests again.
3. Remove the rule:
```bash
sudo tc qdisc del dev lo root
```
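Both scenarios follow the same apply/test/remove pattern, and a forgotten cleanup skews every later measurement. A small wrapper — a sketch only; it requires root, and the command argument is whichever perf pair you are testing — removes the rule even if the test is interrupted:

```bash
# Run a command under a temporary netem rule on lo, deleting the
# rule on shell exit even if the command fails or is interrupted.
with_netem() {
  local args="$1"; shift
  sudo tc qdisc add dev lo root netem $args || return 1
  trap 'sudo tc qdisc del dev lo root' EXIT
  "$@"
}
# Example: with_netem "loss 5%" ./tcp_perf_client
```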
### 3.3 Advanced Test: Multiplexing vs Multi-Connection
This task compares the performance of 5 parallel TCP connections against a single QUIC connection with 5 concurrent streams.
**Scenario 1: TCP Multi-Connection**
Establish 5 TCP connections simultaneously, each transferring 20MB (Total 100MB).
1. Start TCP Multi-Connection Server:
```bash
./tcp_multi_server
```
2. Run TCP Multi-Connection Client:
```bash
./tcp_multi_client
```
3. Record total time and throughput from the server output.
**Scenario 2: QUIC Single-Connection Multi-Streaming**
Establish 1 QUIC connection and open 5 streams concurrently, each transferring 20MB (Total 100MB).
1. Start QUIC Multi-Stream Server:
```bash
./quic_multi_server
```
2. Run QUIC Multi-Stream Client:
```bash
./quic_multi_client
```
3. Record the performance statistics.
**Analysis Points:**
- Compare completion times in a normal network.
- Use `tc` to simulate packet loss (e.g., 5%). Observe how QUIC's stream multiplexing mitigates head-of-line (HoL) blocking: a lost packet stalls only the QUIC stream it belongs to, and the other four streams keep progressing, whereas each TCP connection must deliver bytes in order, so a single lost segment stalls that entire connection until it is retransmitted.
### 3.4 Network Recovery