Which is in the 2003 forum...
We may have hit a wall and overloaded the network that the Imagex Server is running on. Here are the specs from my servers thread:
- Intel S5000PSL
- 2x Intel Xeon 2.4GHz Quad-Core CPUs
- 16GB Fully-Buffered RAM
- 80GB SATA RAID1 (onboard)
- 1.76TB RAID10 (storage)
- 3Ware 9550SXU-8LP 64bit SATA RAID
- 2x Intel Gigabit NICs (onboard) TEAMed
- 2x Intel Gigabit NICs (PCI) TEAMed but not used (performance issues)
- apps: Active Directory, Domain Controller, Microsoft OPK, DHCP, DNS, WDS
Networking-wise, this is the current usage. I want to point out that we have temporarily altered the network layout, so this isn't normal. Also, we are not using managed switches, even though they are recommended (the verbiage made it sound required) by our TAM. Alas, I don't get to make purchasing decisions.
Server -> 24 Port gigabit switch (Netgear) -> Netgear 24 port gigabit switch -> Switch3 & Switch4
No activity currently on Switch3
Switch 4 has 2 active connections. Connection1 is imaging with Imagex. Connection2 -> Switch5 -> Switch6
Total connected clients: 25 (24 + 0 + 1)
Client limit on server: 250
Total bandwidth being used (seen via Networking Tab in Task Manager) average: 10%
The problem: roughly half to two-thirds of the clients are actively imaging, using an 8GB image (fast compression), and the remaining third are having the PE transferred to them. None of the clients are locked up, but the data transfer rate appears to be nearly zero.
Doing the math on available bandwidth: 1000 Mb/s (gigabit) x 0.10 = 100 Mb/s actually in use. Split across 25 clients, that is at most 4 Mb/s (0.5 MB/s) per client, not counting SMB and other overhead.
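For anyone who wants to sanity-check the arithmetic, here is a quick back-of-the-envelope sketch. The figures (gigabit link, 10% utilization, 25 clients, 8GB image) come straight from this post; everything else is just unit conversion:

```python
# Per-client throughput estimate from the numbers in this post
link_capacity_mbps = 1000   # single gigabit uplink, in megabits/s
utilization = 0.10          # ~10% average per Task Manager's Networking tab
active_clients = 25         # total connected clients

used_mbps = link_capacity_mbps * utilization      # 100 Mb/s total in use
per_client_mbps = used_mbps / active_clients      # megabits/s per client
per_client_MBps = per_client_mbps / 8             # megabytes/s per client

# Rough time to push the 8GB image at that rate (ignores SMB overhead)
image_size_MB = 8 * 1024
hours_per_image = image_size_MB / per_client_MBps / 3600

print(per_client_mbps, per_client_MBps, round(hours_per_image, 1))
```

At 0.5 MB/s per client, an 8GB image would take roughly four and a half hours, which lines up with the "nearly zero" transfer rate observed on the clients.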
If anyone has any ideas as to why this is happening, other than the "you should be using managed switches" angle, let me know. I totally agree we should upgrade our network in that respect. At least I can say we use the same model switches throughout. I will pull a trace and open an SR with Microsoft.