University of Illinois at Urbana-Champaign, Advisor: Dr. Brighten Godfrey
Besides my research, I also help design and maintain the OCEAN networking testbed
University of Illinois at Urbana-Champaign
Shanghai Jiao Tong University
I am currently a fourth-year PhD student in the Systems and Networking group at UIUC. I feel lucky to have the chance to work with my awesome advisor, Prof. Brighten Godfrey, and some of my brilliant friends. Previously, I received my B.S. degree in Electrical Engineering from Shanghai Jiao Tong University, where I worked in the Institute of Wireless Communication under the supervision of Prof. Xinbing Wang.
My work experience and research interests span high-performance and low-latency data transfer architectures, SDN in enterprise networks, network virtualization, high-performance data center fabrics, and game theory.
I am broadly interested in challenging problems in networking and systems. The main thread of my research so far is rethinking the 30-year-old architecture of TCP, possibly the most widely used piece of Internet software (everyone who connects to the Internet uses it), to achieve consistently high-performance, low-latency data delivery. Previously, I worked on high-performance data center networks, software-defined networking in enterprise networks, and game theory. Most recently, I have been looking at the distributed-systems side of various blockchain-related technologies.
Outside research, I design and build fun and useful systems. A few that are currently in use are listed here.
The TCP family has failed to achieve consistently high performance in the face of complex production networks: even specialized TCP variants are in many cases 10x away from optimal performance. We argue this is due to a fundamental architectural deficiency in TCP: hardwiring packet-level events to control responses without understanding the real performance result of its actions.
Performance-oriented Congestion Control (PCC) is a new architecture that achieves consistent high performance even under challenging conditions. PCC senders continuously observe the connection between their actions and empirically experienced performance, enabling them to consistently adopt actions that result in high performance.
Interactive applications like web browsing are sensitive to latency. Unfortunately, TCP consumes significant time in its start-up phase and in loss recovery. Halfback is a new short-flow transmission mechanism that operates at a better point on the latency-safety trade-off: it achieves lower latency than the lowest-latency previous solution and, at the same time, significantly better safety. Because Halfback is TCP-friendly and requires only sender-side changes, it is feasible to deploy.
Using a scalable source routing mechanism, we propose a simple approach to realize the vision of a flexible, high-performance fabric: the network should expose every possible path, allowing a controller or edge device maximum choice.
We model spectrum opportunities in a time-frequency division manner. This model caters to much more flexible requirements from secondary users (SUs) and has a clear practical interpretation. We solve the spectrum allocation problem by designing a combinatorial auction with a truthfulness guarantee and computational efficiency.
To avoid potential congestion or data loss due to the overflow of some sensor nodes, we first design a novel bandwidth allocation mechanism, SWM, which maximizes the social utility, an indicator of each sensor node's satisfaction and of social fairness. Furthermore, we model the allocation process under SWM as a noncooperative game and derive its unique Nash equilibrium. The uniqueness of the equilibrium demonstrates that the network will converge to a fair and stable state.
Unpublished work. Out-of-sample four-month backtest P&L: 297%, with a 7.91 Sharpe ratio. Live trading has yielded outstanding performance.
An iPhone app that lets you control a DJI Phantom 3 drone with voice commands, or with your mind via an EEG headset.
Interactive applications like web browsing are sensitive to latency. Unfortunately, TCP consumes significant time in its start-up phase and in loss recovery. Existing sender-side optimizations use more aggressive start-up strategies to reduce latency, but at the same time they harm safety, in the sense that they can damage co-existing flows' performance and potentially the network's overall ability to deliver data. In this paper, we experimentally compare existing solutions' latency performance and, more importantly, the trade-off between latency and safety at both the flow level and the application level. We argue that existing solutions still operate away from the sweet spot on this trade-off plane. Based on this diagnosis, we introduce Halfback, a new short-flow transmission mechanism that operates at a better point on the latency-safety trade-off: Halfback achieves lower latency than the lowest-latency previous solution and, at the same time, significantly better safety. Because Halfback is TCP-friendly and requires only sender-side changes, it is feasible to deploy.
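The core idea, as I read it, can be sketched in a few lines. This is a simplified illustration of an aggressive-start-plus-proactive-redundancy schedule, not the paper's exact algorithm; the `redundancy` parameter is a made-up knob for the sketch:

```python
def halfback_schedule(num_pkts, redundancy=0.5):
    """Simplified illustration (not the paper's exact algorithm):
    transmit the whole short flow up front, then proactively resend
    the last fraction of packets in reverse order, so tail losses are
    repaired without waiting for a retransmission timeout."""
    initial = list(range(num_pkts))           # send everything immediately
    tail = int(num_pkts * redundancy)         # how many packets to protect
    proactive = list(range(num_pkts - 1, num_pkts - 1 - tail, -1))
    return initial + proactive

# A 4-packet flow with 50% redundancy: packets 0..3, then 3 and 2 again.
```

The reverse-order tail resend reflects the intuition that the last packets of a short flow are the ones a sender cannot recover quickly via duplicate ACKs.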
An emerging architecture for software-defined data centers and WANs is the network fabric, where complex application-sensitive functions are factored out, leaving the network itself to provide a simple, robust, high-performance data delivery abstraction. This requires performing route optimization in real time and across a diverse set of paths. A large variety of techniques have been proposed to provide path diversity for network fabrics. But, running up against the constraint of forwarding table size, these proposals are topology-dependent, complex, and still provide only limited path choice, which (we show) can impact performance. We propose a simple approach to realize the vision of a flexible, high-performance fabric: the network should expose every possible path, allowing a controller or edge device maximum choice. To this end, we observe that source routing can be encoded and processed compactly in a single field, even in large networks, with OpenFlow 1.3. We show that, in addition to the expected decrease in required forwarding table size, source routing supports optimal throughput performance, in some cases significantly higher than that of past proposals. We thus believe source routing offers a clean abstraction and efficient implementation for future network fabrics.
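The compact encoding can be illustrated with a toy label format. The 8-bits-per-hop layout below is an assumption for illustration only; the actual OpenFlow 1.3 encoding in the paper may differ:

```python
PORT_BITS = 8  # assumed bits per hop, enough for 256 ports per switch

def encode_route(ports):
    """Pack a list of per-hop output ports into one integer label,
    first hop in the least significant bits."""
    label = 0
    for i, p in enumerate(ports):
        label |= p << (i * PORT_BITS)
    return label

def next_hop(label):
    """What each switch does: read its own output port from the low
    bits, then shift so the next switch sees its port in the low bits."""
    return label & ((1 << PORT_BITS) - 1), label >> PORT_BITS
```

With this layout, a route through output ports 3, 7, 1 becomes a single integer, and each switch needs only a mask and a shift rather than a per-destination forwarding entry.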
TCP and its variants have suffered from surprisingly poor performance for decades. We argue the TCP family has little hope of achieving consistent high performance due to a fundamental architectural deficiency: hardwiring packet-level events to control responses. We propose Performance-oriented Congestion Control (PCC), a new congestion control architecture in which each sender continuously observes the connection between its actions and empirically experienced performance, enabling it to consistently adopt actions that result in high performance. We prove that PCC converges to a stable and fair equilibrium. Across many real-world and challenging environments, PCC shows consistent and often 10× performance improvement, with better fairness and stability than TCP. PCC requires no router hardware support or new packet format.
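The control loop at the heart of this architecture can be sketched as follows. The utility function and the probe-and-move rule are simplified stand-ins, not PCC's actual monitor-interval machinery:

```python
def pcc_style_rate_control(utility, rate=1.0, eps=0.05, iters=200):
    """Sketch of performance-oriented rate control: probe a slightly
    higher and a slightly lower sending rate, observe the empirical
    utility of each, and move the rate in the better direction."""
    for _ in range(iters):
        if utility(rate * (1 + eps)) > utility(rate * (1 - eps)):
            rate *= 1 + eps
        else:
            rate *= 1 - eps
    return rate

def toy_utility(rate, capacity=10.0):
    """Toy stand-in: reward delivered throughput, penalize the loss
    that appears once the rate exceeds the bottleneck capacity."""
    loss = max(0.0, rate - capacity)
    return (rate - loss) - 5.0 * loss
```

The point of the sketch is the architectural contrast with TCP: the sender reacts to measured performance of its own actions, not to hardwired packet-level events, so under the toy utility the rate settles near the bottleneck capacity.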
In this paper, we tackle the spectrum allocation problem in cognitive radio (CR) networks with time-frequency flexibility in mind, using a combinatorial auction. Different from all previous work using auction mechanisms, we model spectrum opportunities in a time-frequency division manner. This model caters to much more flexible requirements from secondary users (SUs) and has a clear practical interpretation. The additional flexibility also brings theoretical and computational difficulties. We model the spectrum allocation as a combinatorial auction and show that under the time-frequency flexible model, maximizing the social welfare is NP-hard, and the upper bound on the worst-case approximation ratio is √m, where m is the number of time-frequency slots. We therefore design an auction mechanism with a near-optimal winner determination algorithm whose worst-case approximation ratio reaches the upper bound √m. Furthermore, we devise a truthful payment scheme under the approximate winner determination algorithm to guarantee that all the bids submitted by SUs reflect their true valuations of the spectrum. To reach optimality, we also simplify the general model to one in which only frequency flexibility is allowed, which is still useful, and propose a truthful, optimal, and computationally efficient auction mechanism under the modified model. Extensive simulation results show that all the proposed algorithms generate high social welfare as well as a high spectrum utilization ratio. Moreover, the actual approximation ratio of the near-optimal algorithm is much better than the worst-case bound.
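The flavor of a √m-approximate winner determination can be illustrated with the classic greedy rule for single-minded bidders. This is a general sketch of that well-known rule, not necessarily the paper's exact algorithm; each bid is a (bundle of slots, value) pair:

```python
from math import sqrt

def greedy_winner_determination(bids):
    """Greedy allocation for single-minded bidders: rank each bid by
    value / sqrt(bundle size) and accept bids greedily whenever their
    requested time-frequency slots are still free. This rule is known
    to give a sqrt(m)-approximation to the optimal social welfare."""
    order = sorted(bids, key=lambda b: b[1] / sqrt(len(b[0])), reverse=True)
    taken, winners, welfare = set(), [], 0.0
    for bundle, value in order:
        if taken.isdisjoint(bundle):   # bundle conflicts with no winner
            taken |= set(bundle)
            winners.append((bundle, value))
            welfare += value
    return winners, welfare
```

Dividing by √(bundle size) balances high-value bids against bids that monopolize many slots, which is exactly where the √m worst case comes from.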
In this paper, we deal with possible data transmission congestion at the sink node in wireless sensor networks (WSNs). We consider a scenario in which all sensor nodes have a certain amount of storage space and acquire data from their surroundings at heterogeneous speeds. Because the receiving bandwidth of the sink node is limited, a proper bandwidth allocation mechanism should be implemented to avoid possible congestion or data loss due to the overflow of some sensor nodes. To address this problem, we first design a novel bandwidth allocation mechanism, SWM, that maximizes the social utility, an indicator of each sensor node's satisfaction and of social fairness. Furthermore, we model the allocation process under SWM as a noncooperative game and derive its unique Nash equilibrium. The uniqueness of the equilibrium demonstrates that the network will converge to a fair and stable state.
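The flavor of such an allocation can be sketched with a proportionally fair split. This is a hedged stand-in for illustration; the paper's SWM mechanism is not reproduced here:

```python
def allocate_bandwidth(demands, capacity):
    """Proportionally fair stand-in for a welfare-maximizing
    allocation: if the sink can satisfy everyone, do so; otherwise
    split its receiving bandwidth in proportion to each node's demand,
    so no node is starved and heavier sources get larger shares."""
    total = sum(demands)
    if total <= capacity:
        return list(demands)  # no congestion: every node is satisfied
    return [capacity * d / total for d in demands]

# Three nodes demanding 4, 6, and 10 units at a sink with capacity 10
# receive 2, 3, and 5 units respectively.
```

A split like this is a natural candidate for a stable operating point: under it, no single node can grab more bandwidth without shrinking another node's proportional share, which is the intuition behind analyzing the allocation as a noncooperative game.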
Implemented the first version of the Traceflow backend, a tool to diagnose and analyze virtual network configuration errors for VMware's NSX network virtualization platform. Implementing Traceflow required understanding most of NSX's complex software layers and learning a new declarative language, nlog.
Tongqu.me helps university students discover offline, university-related event resources, such as parties, training courses, academic talks, and technology meetups, and find people who share their interests.
I co-founded the company and worked on both the marketing and the technology side (building a natural language processing engine and a QR-code electronic ticketing system). The website is now the main hub of student activities at my undergraduate university.
I worked on interdisciplinary research combining wireless networks and game theory; please see the Research tab for details.