There was a time when NVIDIA’s SLI and AMD’s Crossfire technologies sat at the forefront of the gaming and high-performance computing communities thanks to the benefits they provided.
These technologies made it possible to build a multi-GPU configuration: users could connect multiple graphics cards to boost their PC’s power and run games or workloads that pushed a single card to its limits.
But are multi-GPU configurations still relevant today? Find out by reading this post.
The terms refer to NVIDIA’s SLI and AMD’s Crossfire technologies, which let users connect up to four graphics cards to increase their PC’s performance. For the cards to work together, however, users had to account for the motherboard, the power supply, and compatibility with the rest of the hardware. In addition, specific driver software was required to unlock the cards’ combined capabilities.
Multi-GPU configurations reached their peak of popularity in the mid-2010s. At that time, it was not unusual to find a PC with four graphics cards installed to handle the demands of high-end games. The gaming community had internalized the idea that if one GPU was good, two or three more had to be better. As time went by, however, the outlook for multi-GPU configurations darkened for several reasons:

- Performance scaling diminished sharply with each additional card.
- Frame-pacing problems such as micro-stuttering hurt perceived smoothness.
- Fewer and fewer games shipped with SLI or Crossfire support.
- The cost, power draw, and heat of multiple cards became hard to justify.
All of this led both NVIDIA and AMD to wind down their multi-GPU technologies. NVIDIA progressively phased out SLI, restricting NVLink support in the RTX 30 series to the RTX 3090 and ending the creation of new SLI driver profiles, while AMD retired the Crossfire branding in favor of DirectX 12’s explicit multi-GPU (mGPU) support.
Although multi-GPU configurations lost their popularity in gaming, multiple GPUs are still used in scenarios such as:

- Scientific computing and other HPC workloads
- Training and serving deep learning models
- Professional 3D rendering and video production
As impressive as it looked, a multi-GPU configuration could introduce several bottlenecks that hurt overall system performance and efficiency.
As the number of GPUs in a PC increased, so did the communication load between them, and with it the risk of a bottleneck. Data constantly had to be transferred between GPUs, which increased latency and dragged down overall performance.
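A toy back-of-the-envelope model illustrates why communication cost grows faster than the GPU count. This is not a benchmark: the 0.25 GB payload and the 16 GB/s shared-bus figure are illustrative assumptions, and real systems overlap transfers with compute.

```python
def transfer_time_s(payload_gb: float, num_gpus: int, link_gbps: float) -> float:
    """Time to exchange one payload between every GPU pair over a shared link.

    With n GPUs there are n*(n-1)/2 pairs, so total traffic grows
    quadratically while the shared link bandwidth stays fixed.
    """
    num_links = num_gpus * (num_gpus - 1) // 2  # all-to-all pairs
    total_gb = payload_gb * num_links
    return total_gb / link_gbps

for n in (2, 3, 4):
    t = transfer_time_s(0.25, n, 16.0)  # hypothetical 0.25 GB payload, ~16 GB/s bus
    print(f"{n} GPUs: {t * 1000:.1f} ms per exchange")
```

Doubling the GPU count from two to four multiplies pairwise traffic by six in this model, which is the quadratic growth behind the latency problem described above.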
One thing to know about a GPU is that it has its own dedicated memory. In a multi-GPU configuration, poor management of bandwidth and total memory capacity could also turn the bottleneck into a reality: excessive memory demand would then drag down the performance of the whole system.
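An often-overlooked detail here: under SLI/Crossfire alternate-frame rendering, each card keeps its own full copy of the frame data, so VRAM does not pool across cards. A minimal sketch of that effective-capacity rule:

```python
def effective_vram_gb(card_vram_gb: list) -> float:
    """Usable VRAM under alternate-frame rendering.

    Each card mirrors the same textures and buffers, so the usable
    capacity is bounded by the smallest card, not the sum of all cards.
    """
    return min(card_vram_gb)

print(effective_vram_gb([8.0, 8.0]))  # two 8 GB cards -> 8 GB usable, not 16
print(effective_vram_gb([8.0, 4.0]))  # a mismatched 4 GB card caps the pair at 4 GB
```

This mirroring is one reason adding a second card never doubled the memory headroom the way buyers often expected.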
A multi-GPU configuration also meant higher power consumption and heat output. This could cause cooling and power-delivery problems, and thermal throttling would eventually produce yet another bottleneck.
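A rough power-budget check makes the scale of the problem concrete. The wattages below are a hypothetical build, not measured figures:

```python
def psu_headroom_w(gpu_tdps_w, other_components_w, psu_capacity_w):
    """Remaining PSU capacity after the whole system's nominal draw.

    Multi-GPU draw adds up fast, and running a PSU near or past its
    limit invites instability and heat problems.
    """
    total_draw = sum(gpu_tdps_w) + other_components_w
    return psu_capacity_w - total_draw

# Hypothetical build: four 250 W cards plus ~300 W for CPU, board, and drives
print(psu_headroom_w([250, 250, 250, 250], 300, 1200))  # -100: a 1200 W PSU falls short
```

A single-GPU build with the same supporting hardware would leave hundreds of watts of headroom, which is part of why single high-end cards won out.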
Another challenge was ensuring software and hardware compatibility between the graphics cards in the configuration, especially when working with complex deep learning frameworks and libraries that made heavy use of GPU resources. If the installed multi-GPU setup could not supply the resources those workloads demanded, an imbalance arose that could turn into a bottleneck.
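The imbalance shows up most clearly in synchronous data-parallel work, where every device waits for the slowest one. A minimal sketch of that straggler effect (the millisecond figures are illustrative):

```python
def step_time_ms(per_gpu_times_ms):
    """Wall-clock time of one synchronous step across several GPUs.

    All devices must finish before the step completes, so a single
    under-resourced or mismatched card caps the whole configuration.
    """
    return max(per_gpu_times_ms)

print(step_time_ms([10.0, 10.0, 10.0, 10.0]))  # balanced: 10.0 ms per step
print(step_time_ms([10.0, 10.0, 10.0, 18.0]))  # one slow card: 18.0 ms per step
```

In the second case, three of the four cards sit idle 8 ms out of every 18, which is exactly the kind of imbalance described above.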
Interconnect and network limits could also cause a multi-GPU bottleneck. The problem stemmed from restricted bandwidth on the links between GPUs, including PCIe bandwidth limits and the small number of lanes available for data transfer between the CPU and the graphics cards.
Bottlenecks also arose from NUMA (Non-Uniform Memory Access) effects, caused by asymmetric bandwidth between local and remote GPU memory in multi-GPU systems. Together, these interconnect and network constraints could degrade data transfer rates, latency, and overall system performance.
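The lane problem is easy to quantify. Consumer CPUs expose a fixed pool of PCIe lanes (commonly 16 for graphics), so each added GPU shrinks every card's link. The sketch below assumes PCIe 3.0 at roughly 0.985 GB/s of usable throughput per lane; exact figures vary by generation and overhead:

```python
PCIE3_GBPS_PER_LANE = 0.985  # approx. usable PCIe 3.0 throughput per lane

def per_gpu_bandwidth(total_cpu_lanes: int, num_gpus: int) -> float:
    """CPU-to-GPU bandwidth per card when a fixed lane pool is split evenly."""
    lanes_each = total_cpu_lanes // num_gpus
    return lanes_each * PCIE3_GBPS_PER_LANE

for n in (1, 2, 4):
    print(f"{n} GPU(s): x{16 // n} link, ~{per_gpu_bandwidth(16, n):.1f} GB/s each")
```

Going from one card at x16 to four cards at x4 quarters each card's link to the CPU, before any GPU-to-GPU traffic is even counted.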
Resolving the bottlenecks in a multi-GPU configuration required both hardware and software strategies. Here are a few that proved useful:

- Using a high-bandwidth interconnect such as NVLink instead of relying on PCIe alone
- Minimizing data transfers between GPUs and balancing the workload across cards
- Sizing the power supply and cooling for the combined load
- Keeping drivers, frameworks, and game profiles up to date
With the decline in the use of multi-GPU configurations, manufacturers began developing alternatives to optimize the gaming experience. The following options emerged:

- Increasingly powerful single GPUs that match or exceed older multi-card setups
- AI-based upscaling technologies such as NVIDIA DLSS and AMD FSR
- Cloud gaming services that move the GPU workload off the local PC
Although multi-GPU configurations built on SLI and Crossfire are an exception today, there is no denying the impact they had on the market: many people took advantage of their capabilities to boost their computers’ performance.
However, the limitations and challenges that emerged over time caused these technologies to fall into disuse, replaced by more efficient, easier-to-use graphics cards that deliver performance on par with a multi-GPU setup. That is why, for most gaming enthusiasts today, it’s more cost-effective to invest in a single high-end GPU for better performance and reliability, avoiding the headaches of a multi-GPU configuration.