Created this account purely to say thank you so much! Been working on this all weekend.
Vicky
Hi,
I'm trying to find drivers for an Intel 10G network adapter that we want to use with an HP server. The server has Ubuntu 12.04 LTS installed (3.8.0 kernel).
I've looked at the instructions on this page: Intel® PRO/100, PRO/1000 & PRO/10GbE Network Adapter ID & Driver Guide. I get this from the lspci -nn |... command:
0a:00.0 Ethernet controller [0200]: Intel Corporation Device [8086:155d] (rev 01)
0a:00.1 Ethernet controller [0200]: Intel Corporation Device [8086:155d] (rev 01)
The 155d device is nowhere to be found in the table on that page, which is rather bizarre. Our order data says the card is an "INTEL ENET SRV BYPASS ADAPT X520-SR2". I found the X520-SR2 entry, then downloaded and installed ixgbe 3.18.7 (using sudo make install), but it doesn't seem to find any adapters. Same with 3.19.1:
$ dmesg | tail
....
[ 376.930032] Intel(R) 10 Gigabit PCI Express Network Driver - version 3.19.1
[ 376.930039] Copyright (c) 1999-2013 Intel Corporation.
[ 392.402521] init: tty1 main process ended, respawning
[ 1696.207682] Intel(R) 10 Gigabit PCI Express Network Driver - version 3.18.7
[ 1696.207689] Copyright (c) 1999-2013 Intel Corporation.
Am I doing something wrong?
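For what it's worth, one way to narrow this down is to check whether the ixgbe build you installed actually claims your card's PCI device ID. A minimal sketch, assuming the lspci output quoted above (the sample line is just that output pasted in as text; on the real machine you would run lspci and modinfo directly):

```shell
# Extract the vendor:device ID from an lspci -nn line (sample pasted from
# the output above), then print the modinfo check to run on the real host
# to see whether the installed ixgbe build claims that ID.
sample='0a:00.0 Ethernet controller [0200]: Intel Corporation Device [8086:155d] (rev 01)'
id=$(printf '%s\n' "$sample" | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p')
echo "PCI ID: $id"
echo "check with: modinfo ixgbe | grep -i ${id#*:}"
```

If the device ID doesn't show up in `modinfo ixgbe`, that driver version simply predates the device and a newer release (or a kernel update) is needed.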
Hey guys, sorry to barge in, but I have a DA2 with supported Intel SR optics and I haven't been able to resolve the following error, and I've given it both barrels. Any words of wisdom or encouragement?
ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 2.0.44-k2
ixgbe: Copyright (c) 1999-2010 Intel Corporation.
ixgbe 0000:0e:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
ixgbe 0000:0e:00.0: setting latency timer to 64
ixgbe 0000:0e:00.0: failed to initialize because an unsupported SFP+ module type was detected.
Reload the driver after installing a supported module.
ixgbe 0000:0e:00.0: PCI INT A disabled
ixgbe 0000:0e:00.1: PCI INT B -> GSI 18 (level, low) -> IRQ 18
ixgbe 0000:0e:00.1: setting latency timer to 64
ixgbe 0000:0e:00.1: failed to initialize because an unsupported SFP+ module type was detected.
Reload the driver after installing a supported module.
ixgbe 0000:0e:00.1: PCI INT B disabled
ixgbe 0000:17:00.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
ixgbe 0000:17:00.0: setting latency timer to 64
ixgbe 0000:17:00.0: failed to initialize because an unsupported SFP+ module type was detected.
Reload the driver after installing a supported module.
ixgbe 0000:17:00.0: PCI INT A disabled
ixgbe 0000:17:00.1: PCI INT B -> GSI 16 (level, low) -> IRQ 16
ixgbe 0000:17:00.1: setting latency timer to 64
ixgbe 0000:17:00.1: failed to initialize because an unsupported SFP+ module type was detected.
Reload the driver after installing a supported module.
ixgbe 0000:17:00.1: PCI INT B disabled
ixgbe: Unknown parameter `allow_unsupported_sfp'
-bash-4.1#
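A note on that last line: the `allow_unsupported_sfp` module parameter does not exist in the 2.0.44-k2 in-kernel driver shown in this log; it was added in later ixgbe releases, which is why modprobe reports it as unknown. A small self-contained sketch that tallies the failing ports from a saved dmesg capture (the sample text is just the log quoted above):

```shell
# Count how many ports failed probe with the unsupported-SFP+ error in a
# saved dmesg capture (sample lines pasted from the log above).
log='ixgbe 0000:0e:00.0: failed to initialize because an unsupported SFP+ module type was detected.
ixgbe 0000:0e:00.1: failed to initialize because an unsupported SFP+ module type was detected.
ixgbe 0000:17:00.0: failed to initialize because an unsupported SFP+ module type was detected.
ixgbe 0000:17:00.1: failed to initialize because an unsupported SFP+ module type was detected.'
failed=$(printf '%s\n' "$log" | grep -c 'unsupported SFP+')
echo "ports failing SFP+ validation: $failed"
# With a newer out-of-tree ixgbe build that supports the parameter, the
# usual reload would be:  modprobe -r ixgbe && modprobe ixgbe allow_unsupported_sfp=1
```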
I'm using two i350 components (as two mini PCIe modules).
Each module has an i350 with a x1 PCIe lane and two 10/100/1000 Ethernet interfaces.
I'm using only 3 of the 4 available Ethernet interfaces, and I'm configuring them to 100 Mbps full duplex (the 4th interface is disabled in Device Manager).
I also use an 82579 on my board as a 10/100/1000 Ethernet interface configured for auto-negotiation.
OS: Windows XP
During normal operation, one of the 100 Mbps ports disconnects for a few seconds and then reconnects.
I see Event ID 27 in the event viewer of the operating system.
BIOS is updated.
What is the reason, and how can I solve it?
Thank you
Raz
Hello,
Is there a way I can query the Intel PXE OROM version and/or the PXE 2.1 build number from within Windows 7?
I have installed the Proset utilities but did not see these version numbers in the Device Manager properties of the NIC. I installed the Intel Ethernet Flash Firmware Utility, but it seems unable to detect the version. This particular Latitude E6420 shows an Intel(R) 82579LM Gigabit Network Connection in Device Manager on Windows 7 32-bit. I noticed the -FLASHENABLE option but did not want to screw anything up, so I have not run it. See output below:
BOOTUTILW32.EXE
Intel(R) Ethernet Flash Firmware Utility
BootUtil version 1.4.83.0
Copyright (C) 2003-2013 Intel Corporation
Type BootUtil -? for help
Port Network Address Location Series  WOL Flash Firmware    Version
==== =============== ======== ======= === ================= =======
1    <MAC address>   0:25.0   Gigabit YES FLASH Not Present
I have also tried Dell CCTK and LANDesk Inventory, but neither seems to have that info either.
Thanks,
Jason
Solved the problem. It was a bad patch cable.
Solved the problem. It was a bad patch cable for me.
There's a discussion on Dell's forums about the i217-LM causing a flood of IPv6 multicast traffic during sleep state. Is Intel aware of this issue? Is it a driver issue which has been addressed?
I recently updated my Microsoft Deployment Toolkit repository with the latest cabs to support new Dell laptops.
After the site was updated, I started having problems with older Dell desktops and laptops that were reimaged. The network cable showed as disconnected after powering up the system, and it stayed disconnected until we physically unplugged the network cable and reconnected it. The 100 Mbps full-duplex setting was correct. Updating to the latest version of the driver fixed the connection status problem on that older hardware.
We now have a persistent problem with the latest systems, which have the I217-LM NIC onboard. The disconnected-wire status problem at boot-up occurred, and with the latest driver downloaded from Intel we have a duplex problem. All our Dell switches are configured to force the connection to 100 Mbps full duplex, so we force the NIC to 100 Mbps full duplex as well. But when we check the connection status in the Intel advanced tool, it shows 100 Mbps half-duplex.
After testing all the possibilities, the only way we could achieve a stable 100 Mbps full-duplex connection was to set both the switch port and the NIC to auto-negotiation. This is not an option for us because all the PCs are daisy-chained with an IP phone configured to work at 100 Mbps full duplex, and with other NICs auto-negotiation does not always work well.
I tried the older driver versions available for download: the disconnected status at boot-up occurred with every version prior to 18.7, and with 18.7 it is impossible to force the speed to 100 Mbps full duplex.
Is it possible to have a quick fix for this problem? I am pretty sure it is a compatibility problem between the latest NIC hardware, the driver, and the Dell switches.
Thanks for your attention.
We have the same issue on some new HP EliteDesk 800s with the i217-LM card.
100mb full just doesn't work. The switches are hard set to 100mb/Full, but the PCs report 100mb/Half whether we set them to auto-negotiate or hard-set them to 100mb/Full. This causes collisions and we get dropped packets.
A test PC is set to 1GB (switch too) and it works fine.
Our network is 100mb/full and the switches are hard set (and we want it kept that way).
Any ideas? We want to hard set the PCs and the switch to 100mb/FULL like we have done on all 800 PCs in the building!
Was this ever fixed properly? We have the latest drivers from Intel but can't get 100mb/Full working. Our switches are hard set to 100mb/full, but the PCs (HP EliteDesk 800s) report 100mb/half and we get collisions. They report half even when we hard-set the NIC to full.
We have an issue with the networking on the built-in i217-LM Intel NIC on HP EliteDesk 800 PCs
100mb full just doesn't work. Our Allied Telesis switches are hard set to 100mb/Full but the PCs are reporting (in the NIC settings) 100mb/Half when we set the PC to auto negotiate or hard set to 100mb/Full. This causes collisions and we get dropped packets. A test PC is set to 1GB (switch too) and it works fine.
Our network is 100mb/full and the switches are hard set (and we want it kept that way). I've played around with some of the settings and had no luck. If we have the switch and PC hard set to 100mb FULL we get problems.
Any ideas? We want to hard set the PCs and the switch to 100mb/FULL like we have done on all the other PCs in the building!
We've had some success with changing "Wait for Link" in the advanced settings to Off, but then when the PC reboots the network doesn't come back up unless we unplug and re-insert the cable.
Same issue with Dell Optiplex 9020's with the I217-LM card. Auto-negotiate works fine, but the network requires 100mb/Full duplex. When changing to this setting, the NIC shows as no cable connected.
Hopefully Intel will hop on this issue quickly.
I have a high throughput UDP application running on vSphere 5.1 and I am dropping packets due to rx_dma_no_resources (ethtool -S vmnic4). I have attempted to change the rx ring buffer size to 4096, but no matter what I do the highest I can seem to get is 456 (according to ethtool -G). Ethtool says the highest supported value is 4096, so does anyone know how I can increase the ring buffer size to that amount? I am using driver 3.1.32.
Thanks!
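In case it helps others debugging the same thing: `ethtool -g` (lowercase) reports both the preset maximums and the current hardware settings, and `ethtool -G` changes them. A minimal sketch that pulls the two RX values apart, using sample output with the numbers from this post pasted in as text (on a real host you would pipe `ethtool -g vmnic4` instead):

```shell
# Compare the preset maximum vs. current RX ring size from `ethtool -g`
# style output. The sample below is hypothetical text using the values
# reported in the post, so the script is self-contained.
sample='Ring parameters for vmnic4:
Pre-set maximums:
RX:             4096
Current hardware settings:
RX:             456'
max_rx=$(printf '%s\n' "$sample" | awk '/Pre-set/{m=1} m&&/^RX:/{print $2; exit}')
cur_rx=$(printf '%s\n' "$sample" | awk '/Current/{c=1} c&&/^RX:/{print $2; exit}')
echo "RX ring: $cur_rx of $max_rx"
# To raise it (driver and firmware permitting):  ethtool -G vmnic4 rx 4096
```

If `-G` silently caps the value below the advertised maximum, that is typically a driver limitation, which matches the advice below to update the driver first.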
The version of the driver you have is a bit dated. I would try the latest, which is available from VMware at:
http://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=12665&deviceCategory=io&partner=46&releases=171&deviceTypes=6&pFeatures=65&vioSolutions=Standard - IO Devices&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc
If this still does not work, I am afraid you will need to open a customer case with VMware; they own this driver. If VMware in turn needs Intel's assistance, they will contact us.
Best of luck,
Patrick
We are having this same issue on our network. Auto-negotiate seems to work fine; the only problem is that some of our switches are set to 100/full.
This problem started with our latest shipment of Dell 9020's and the I217-LM card.
Intel we need a fix for this!
Hello,
I am looking to force an install of Intel Proset 18.8 where an Intel adapter is not yet present.
We are a system builder and regularly build new server images for different Microsoft server operating systems. Normal procedure is that we build a test server, install drivers, install utilities & programs such as Adobe Reader, Intel Proset etc and then capture that image using Microsoft DISM. We then offline inject drivers to take into account other possible image deployment scenarios such as different RAID drivers etc with the intention being that the image will be as hardware independent and universal as possible. All physical hardware is then detected correctly as soon as the image is deployed to a physical server.
Recently we have started creating master images using Windows Hyper-V rather than a physical server. All images are eventually fully tested on hardware, but for the initial server OS install and DISM capture the use of VMs is very convenient. Installation of programs, utilities and drivers is flawless except for one problem: Intel Proset will not install because it cannot see any Intel network adapters. I have tried installing some dummy Intel parts via the Add Hardware wizard in Device Manager, but to no avail.
Is there a way to force an install of the software component of Intel Proset (excluding drivers) without an Intel LAN adapter being present?
Regards,
GT
Hi Patrick,
With the latest drivers you mentioned, we still see that the MSI-X table for each VF has three entries, but each VF receives only two IRQs:
03:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Region 0: [virtual] Memory at d1300000 (64-bit, prefetchable) [size=16K]
Region 3: [virtual] Memory at d1200000 (64-bit, prefetchable) [size=16K]
Capabilities: [70] MSI-X: Enable+ Count=3 Masked-
but still PFs and VFs receive two IRQs each:
Jan 27 02:32:00 la-01-04 kernel: [    4.887658] ixgbe 0000:03:00.1: (unregistered net_device): SR-IOV enabled with 32 VFs
Jan 27 02:32:00 la-01-04 kernel: [    4.911463] ixgbe 0000:03:00.1: irq 117 for MSI/MSI-X
Jan 27 02:32:00 la-01-04 kernel: [    4.911481] ixgbe 0000:03:00.1: irq 118 for MSI/MSI-X
Jan 27 02:32:00 la-01-04 kernel: [    4.913246] ixgbe 0000:03:00.1: PCI Express bandwidth of 32GT/s available
Jan 27 02:32:00 la-01-04 kernel: [    4.913251] ixgbe 0000:03:00.1: (Speed:5.0GT/s, Width: x8, Encoding Loss:20%)
Jan 27 02:32:00 la-01-04 kernel: [    4.913335] ixgbe 0000:03:00.1: eth5: MAC: 2, PHY: 9, SFP+: 4, PBA No: FFFFFF-0FF
Jan 27 02:32:00 la-01-04 kernel: [    4.913339] ixgbe 0000:03:00.1: f8:4a:bf:56:ce:27
Jan 27 02:32:00 la-01-04 kernel: [    4.913342] ixgbe 0000:03:00.1: eth5: Enabled Features: RxQ: 1 TxQ: 1
Jan 27 02:32:00 la-01-04 kernel: [    4.913356] ixgbe 0000:03:00.1: eth5: Intel(R) 10 Gigabit Network Connection
Jan 27 02:32:00 la-01-04 kernel: [    4.913637] ixgbevf 0000:03:10.0: enabling device (0000 -> 0002)
Jan 27 02:32:00 la-01-04 kernel: [    4.913650] ixgbevf 0000:03:10.0: setting latency timer to 64
Jan 27 02:32:00 la-01-04 kernel: [    4.976594] ixgbevf 0000:03:10.0: PF still in reset state. Is the PF interface up?
Jan 27 02:32:00 la-01-04 kernel: [    4.976600] ixgbevf 0000:03:10.0: Assigning random MAC address
Jan 27 02:32:00 la-01-04 kernel: [    4.976705] ixgbevf 0000:03:10.0: irq 119 for MSI/MSI-X
Jan 27 02:32:00 la-01-04 kernel: [    4.976722] ixgbevf 0000:03:10.0: irq 120 for MSI/MSI-X
Jan 27 02:32:00 la-01-04 kernel: [    4.976729] ixgbevf: eth%d: ixgbevf_init_interrupt_scheme: Multiqueue Disabled: Rx Queue count = 1, Tx Queue count = 1
Jan 27 02:32:00 la-01-04 kernel: [    4.977184] ixgbevf: eth6: ixgbevf_probe: Intel(R) 82599 Virtual Function
Jan 27 02:32:00 la-01-04 kernel: [    4.977188] a2:b2:a3:e9:8e:d3
Jan 27 02:32:00 la-01-04 kernel: [    4.977192] ixgbevf: eth6: ixgbevf_probe: GRO is enabled
Jan 27 02:32:00 la-01-04 kernel: [    4.977195] ixgbevf: eth6: ixgbevf_probe: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver
Jan 27 02:32:00 la-01-04 kernel: [    4.977348] ixgbevf 0000:03:10.2: enabling device (0000 -> 0002)
Jan 27 02:32:00 la-01-04 kernel: [    4.977362] ixgbevf 0000:03:10.2: setting latency timer to 64
Jan 27 02:32:00 la-01-04 kernel: [    5.040503] ixgbevf 0000:03:10.2: PF still in reset state. Is the PF interface up?
Jan 27 02:32:00 la-01-04 kernel: [    5.040512] ixgbevf 0000:03:10.2: Assigning random MAC address
Jan 27 02:32:00 la-01-04 kernel: [    5.040632] ixgbevf 0000:03:10.2: irq 121 for MSI/MSI-X
Jan 27 02:32:00 la-01-04 kernel: [    5.040649] ixgbevf 0000:03:10.2: irq 122 for MSI/MSI-X
Jan 27 02:32:00 la-01-04 kernel: [    5.040656] ixgbevf: eth%d: ixgbevf_init_interrupt_scheme: Multiqueue Disabled: Rx Queue count = 1, Tx Queue count = 1
Jan 27 02:32:00 la-01-04 kernel: [    5.041162] ixgbevf: eth7: ixgbevf_probe: Intel(R) 82599 Virtual Function
Jan 27 02:32:00 la-01-04 kernel: [    5.041166] 8e:03:15:99:05:5e
Jan 27 02:32:00 la-01-04 kernel: [    5.041170] ixgbevf: eth7: ixgbevf_probe: GRO is enabled
Jan 27 02:32:00 la-01-04 kernel: [    5.041173] ixgbevf: eth7: ixgbevf_probe: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver
Is a firmware upgrade perhaps needed to make the MSI-X table have two entries?
We have recently added Multi-Queue support to our SR-IOV solution. Multi-Queue (MQ) uses 3 interrupts; non-MQ uses only 2, but the code still requests 3. I've asked engineering to look into this and see if we can request only 2 when only 2 will be used.
To use MQ, and all 3 interrupts, compile both the PF and VF drivers with CFLAGS_EXTRA="-DIXGBE_ENABLE_VF_MQ".
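As a dry-run sketch of that build step (the source directory names are placeholders for wherever you unpacked the PF and VF driver tarballs; the script only prints the commands, so drop the echo to actually build):

```shell
# Print the build commands that would rebuild the out-of-tree ixgbe (PF)
# and ixgbevf (VF) drivers with VF multi-queue enabled. Directory names
# are placeholders for the unpacked driver sources.
flags='CFLAGS_EXTRA="-DIXGBE_ENABLE_VF_MQ"'
cmds=$(for drv in ixgbe-x.y.z ixgbevf-x.y.z; do
  echo "make -C ${drv}/src ${flags} install"
done)
printf '%s\n' "$cmds"
```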
Hope this helps,
Patrick
Updating to the latest driver worked and allowed me to tune the ring size. I was using the driver that came with an HP custom image. Thanks for your help!