Veteran PC hardware enthusiasts will remember that, years ago, there was a graphics card interconnection technology: Nvidia called it "SLI", AMD called it "Crossfire", and it was commonly known as "multi-GPU" technology.
This technology links two or more graphics cards together, which can significantly improve graphics performance, but it never achieves a true 1+1=2 doubling, let alone 1+1>2; the combined result is better than a single card but well short of two.
Pairing the cards also requires care: only when they are closely matched in memory capacity and bandwidth can the combination perform at its best. If the two cards differ too much, linking them together makes little sense.
Obviously, whether interconnecting multiple graphics cards is worthwhile depends on how much performance the combination gains over a single card. A gain within 20% is hardly worth the trouble; roughly 30% or more is where it starts to make sense.
Multi-GPU setups were popular for a while and then gradually faded away. Some readers may assume this was because the performance gain from interconnection was too small and the feature was simply too underwhelming to survive, but that is not actually the case.
Recently, an overseas enthusiast demonstrated the technology again: specifically, he paired an Intel ARC A770 16GB graphics card with NVIDIA's Titan XP 12GB graphics card.
The Titan XP is an old card, released almost seven years ago, but don't assume its performance is hopelessly outdated or feeble: it is still roughly on par with the Intel ARC A770.
In addition, the memory bandwidth of the ARC A770 16GB is 560 GB/s, while that of the Titan XP 12GB is 548 GB/s, so the two cards are closely matched.
So how much performance do you gain by interconnecting the ARC A770 16GB with the Titan XP 12GB? That is the key question.
The answer is about 70%. The combined setup delivers roughly 70% more graphics performance than either GPU alone, i.e., about 170% of a single card. Compared with the ideal 200% of two cards working perfectly together, only about 30 percentage points are lost to scaling overhead, which is a relatively good result.
In one test, a complex compute workload took about two hours (roughly 120 minutes) on a single graphics card. With the two cards working in parallel, it finished in 1 hour and 27 minutes, saving a little over 30 minutes even by a conservative estimate.
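As a rough illustration, here is a minimal sketch of the arithmetic behind the figures quoted above; the numbers are the ones reported in the article, and the exact benchmark used is not specified.

```python
# Illustrative arithmetic only, using the figures quoted in the article.

single_perf = 1.0      # performance of one card, normalized
combined_perf = 1.7    # reported ~70% uplift over a single card

scaling_gain = (combined_perf - single_perf) / single_perf   # 0.70 -> 70% gain
lost_vs_ideal = 2.0 * single_perf - combined_perf            # 0.30 -> ~30 points short of an ideal 200%

single_minutes = 120   # ~2 hours on one card
dual_minutes = 87      # 1 hour 27 minutes on both cards
minutes_saved = single_minutes - dual_minutes                # 33 minutes, ~30 by a conservative estimate

print(f"gain over one card: {scaling_gain:.0%}")
print(f"shortfall vs. two ideal cards: {lost_vs_ideal:.0%}")
print(f"time saved: {minutes_saved} minutes")
```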
Judging from these results, interconnecting two graphics cards can still deliver a roughly 70% performance boost, which is substantial and genuinely useful. So why has this technology not been carried forward and developed over the years?
One of the main reasons is that it creates extra work for game developers. They have to write a lot of additional code to make a game support multiple graphics cards, while the player base with such setups is relatively small, so the effort is rarely cost-effective.
Another likely reason is that today's mid-range and high-end graphics cards are physically much larger than cards were back then. It is hard to fit two of them on an ordinary motherboard at the same time, and that is before even considering power delivery and power consumption.
On top of that, the approach is not very attractive to ordinary consumers, gamers in particular. Since interconnecting cards never reaches a true 1+1=2, it is simply better to buy a single card with a performance of "2": it costs less than buying two cards with a performance of "1" each, and it delivers more for the money.
To sum up, although interconnecting multiple graphics cards can significantly improve performance, it is hard for the approach to become mainstream in the consumer market. That does not mean the technology has been, or is about to be, phased out: in industrial and commercial settings the situation is the exact opposite.
In data centers, supercomputers, and AI training clusters, for example, interconnecting large numbers of GPUs (tens of thousands in some clusters) is entirely routine. For these users, none of the obstacles above applies: power delivery is not a problem, power consumption is not a problem, and cost is not a problem; as long as performance scales substantially, the investment is worth it.
Multi-GPU interconnection technology is therefore not going to be eliminated or die out. Its applications are now concentrated in certain professional fields, it is expected to develop further in the future, and on the whole it is doing very well indeed.