PCIX confusion
EM1897 at aol.com
Mon Mar 14 07:34:03 PST 2005
In a message dated 3/14/2005 4:07:00 AM Eastern Standard Time, Gabriel Ambuehl <gaml at xxxxxx> writes:
>Hi Boris Spirialitious,
>you wrote:
>
>BS> But what you say about pciE makes me think you
>BS> do not know about hardware either. only pcie x1
>BS> cards are available, which is not fast. What
>BS> pciE cards have you tested? Why would you buy
>BS> machine with pciE? Just because it new? It no
>BS> make sense what you say.
>
>
>Actually, seeing that PCIe is point to point, the bandwidth offered by
>even PCIe 1x is vastly superior to conventional PCI. And once you get
>to the 4x or even 8x range, much less 16x (which pushes more data than
>the FSB on many current CPUs handles, IIRC), you outperform PCI-X with ease.
>Electrically, the links are much easier to route which should lower
>board prices (no more 8 layer server boards) and increase stability,
>even.
>
>Lack of contention and lower latency on point to point links will also
>prove beneficial over the long term
Gabe, gabe, gabe. You have quite an imagination. As long as
your CPU has a dedicated path to each device and you don't
need to access stuff like, say, memory, you are in!
The reality of today is x1 ethernet, so that's where any
discussion has to be. You also have to consider the MB
design and your requirements. One PCI-E MB that I have
used has the PCI-E bandwidth shared with the PCI-X bus.
So it's a more complicated discussion than just PCI vs
PCI-X, which is just a slam dunk.
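For reference, the theoretical peak bandwidths under discussion can be sketched with a little arithmetic. These are the commonly cited figures (32-bit/33 MHz conventional PCI, 64-bit/133 MHz PCI-X, and 250 MB/s per lane per direction for PCIe 1.x after 8b/10b encoding); real-world throughput is lower due to protocol overhead, and as noted above, actual board designs may share bandwidth between buses:

```python
# Theoretical peak bandwidths in MB/s -- a rough sketch, not measured numbers.

# Conventional PCI: 32-bit bus at 33 MHz, shared among all devices on the bus.
pci_mb_s = 32 / 8 * 33  # 132 MB/s, shared

# PCI-X: 64-bit bus at 133 MHz, also a shared bus.
pcix_mb_s = 64 / 8 * 133  # 1064 MB/s, shared

# PCIe 1.x: 250 MB/s per lane, per direction, point to point
# (the 8b/10b encoding overhead is already folded into this figure).
def pcie_mb_s(lanes):
    return 250 * lanes

for lanes in (1, 4, 8, 16):
    print(f"PCIe x{lanes}: {pcie_mb_s(lanes)} MB/s per direction, dedicated")

print(f"PCI:   {pci_mb_s:.0f} MB/s, shared")
print(f"PCI-X: {pcix_mb_s:.0f} MB/s, shared")
```

So on paper a single PCIe lane roughly doubles conventional PCI, and x8 exceeds PCI-X; whether that materializes depends on the motherboard topology, as the post points out.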
More information about the Users mailing list